var/home/core/zuul-output/                      (directory, owner core:core)
var/home/core/zuul-output/logs/                 (directory, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log; binary contents not recoverable as text)
&cR)-WJVd3{,GM k;6S[lq4LהCڭJmײvP9«Qrӄ&3 :JxF8$ 7AUy#.# !WEDK&QFp`8&BLY'T mhV˥q?a"[[e;Ičt 9WP\β.I b'9XeR[*pN^ۑ [YI L:/^&Өn%.̭dVA=A*.#Br|؆I9%]9f\0*J*Ù Nr7Ǭ ˀ9*.G  \$=z.g,U M6nURR3K<wJRDȌMZa)rP<@YZ#gdxt`[vx.ЋjqcK|a§On>330#LJFE3 J$qL&G@IVBǼ0rE ^dKN39d1TԲ£;y~7>C]gmvK]xۻqEF&#u8, R'79"2?&tEUSzs4ɖJe6A?nͺB*YQťݫ4_|Ux{wʷT<wK{;z!ri.(k5eRmaeI-Zst=%;yFY`~-ڡgA}6Y ZzB ~YQg$`/E\j=tqU4W_vuQ t\5DH͘G kkՂz6fϡ22R̼**0A`X/3٫^\iL0s|T3Ҹz,&Dw2.bpՕWU{҂ 2֤}wN|t81 )) E,,'L܇-գ_͸CWncD]gZ82$،OUa`5~JFP18 7WrT8Kdede9lgEi!⧿psdLR4߇Zq.M *?x]~C Sjmkôٚe[a۽CqR#|9[qWyTgl#O0YFmЕMzȪ/gS8!S YGn|1Mkج+mu5x_- m:)\!{bJ')h(s*r[O m=-saʳY5tZ *.y}QP*y+qOw9ϊ]ͳzRD?+֏xSO?jBtoc¥IhV7&AHhÔla2~!~\e &/la^Y[*yHz(ZW^4x)^4Rk9t/:'J{ѯƋ-^P}MV/ {4~!k =+,rb61k\ǝ\y?rZ]jiZ^UT?*Oȶ^65}^] ֪n lE{wo`UkvQ+Lt599oCz[P*~a̷}jmSrM'hsH<@o<2\ߡ 68=d,z(Z qљߌrKwLb<VϽ r<èUϔ *9;&&0TI+$bU&22]^]};ꊁ4R5N6ꀐ֬d 4,&Հ9ũ#JJK '+;ʵrXzqKW3T%S; nuV؍\X;>ky@(pm^|V7Ώ?uV/%㕊O떛B7v^cؗ7˾O#Dpły[mG=yaμ;X@x=col*;*CzAC2xrF&́zrJzO@3+$uRUVTJޫoP]qͥ/H]!2Bu=mL6Օ W6k_8]M.xޠERXrh3ldQKťm\{I)h .E* kt :j9r3C/hE $H-3RUqur4.yOSJɉP+xɭi4_/EI1ʈw`!FlY%h`b2 u@]fH IAEF)H"\bAQ4i)pJU!t*P Y#TcFxI.Rg:EKF&B<Kpgl8q>Ґ֔yb/1jb^s~_їT^<."<V7WF@ t/c8p/("?^m/?*"+[ެé=.QYymDHѺ}`8)C2^&}QG PNyIۏFȼ\qz ry`\ '"X*7:YɩIKј T҇`!g s 5A)`9<$>YNg+ٮF.^I%2E$[\F:>ͭn+]So̬;0ї5}_dr'V<(?>2&pJGp۔* R,)%F'Ы<@-hiER`S0|p\1kc"(K "{Ӎpa y *8YsxֱDљVY + ` R7~2D& a -l[p`hQE]{v@|1/uij/oӱg)MUR0HnCOaBg8oHB4J#]ʫs'ϡˁCTIYh*ᄆpuV,^$Wlg7L|$H`6:6ni;6&?gLR>U2 &&~k;{/mi 턷n1MlU j-lLDǗz4 Ύ76 _^dRT8>1W%O$椈`E$"~78qz`kv`P%6(]繞`4\9 wӻӟy_>}e_O?}-p]fѣ:ZKy4 |?{]V]u-[tݰ6.߿Wf"T*D# Kn|HG%'ѽI|~g"b0 a~SR$  \F{~SʥC :,Gj̀Kd$HN*iD yHF^sq\:T^dge9!j>$xwm@2<5˟ֱC9l=G1 br&@MSY7xNӡn"a Vk*>i:8t; pt"W$W8cR\(w+f QdJD[eqxL76`qz3oͥ`A sv}xWlNjp:27BuMxdHJsBFhK,$AyT*)3}>%p$eMT[ \$+FnA)XYa]>&d0H1H1<$Q8 <'=PtH)4>p](Na^x##H.0"3`v.[Qʤ"Gj%Qݯtc 7 a,5*ƚ$I;Cv' Ii ګr{|2G+ ^*)Y]nU r ޽Bx|s:F3dDlixs9#ڢ9~{!z{em5?AV[xh{>\c^yۇ4&XsOѤq27t]2?шS!`pn5 ҽMU?_ƓO ; mL| J=X^yNfWßѴ+Aj_km^o;{.~_:Ba k\i]ݎ*磼iA]gӀJ2U BA*nH;6O׫gpS^Qжeڳ|r4\2:K%ȝIL'Yf˦!_f\XyxA)홢QXPi#ln% 1 %:<4ˌ`1{όr4%D=)`l (ak8W2T=<*qac-X.ĆP\Wyon2ސJn~ػ{ye; *7M>0chIqHHDDO219c2Zd`9r֬)@{. &RڔFd1cRfM{̚dL̶vcX6 YWZ| h&LTP^*Ɲ$e6J2G65`MgF^F2)CdE& HM1(Ud5͇̇Qd]-X8mÌh{Fq([MLC({Ȳ&@` 9dM r3.@t#o251%d"MmN(#dV3>`%Q&Y |hzLcpv3DUgբsiVr$/ʆ{Jbϋ[gX+&=>\7oip|?< -EOCi֢O52Yy QgpqJUg-6b컅0$e{QQE롋K ·:_,-Gp:w7QsÂu>i~]= ?_]/XV`n=7lN?XT^NXDto_WPÒx1ٿzdYx6 ѣ_&_aTҜ ߒU?MqRLk eGdS69Xv<%x.\7x!FYC!d(BPRhˌ0 Mvo eݏM3uC''͞t&ǵM"pa𫋯^|]|K mJ{ x3s:8[mҘZ!R@Te;o#wal>%4u 숤֓K%ﺧbMm~;]Z|[B~K -6^d¾%֛0)zD /9ql2JUErh|\IPE& J*qF&B c2 MEy3EKKPYT!:Z$霹,․J]RsuS5w1"7,{l.{;Sz\`o7*7]Q9VWYc΋eu`'9ɗr./\&_̹9%'[Μ1"e! Ecj,i󔁎: 7M).T.HRdY)u\FcB#ܴJcpv?5)] g7/o Cr;!Hpg*| .؂;bZsE1-rE1pE *" Ldp?E=K,Z{+x28'"do6HQvFnJsgJ>pW >}`F;pڒ՘ EM-&xd$ܛZ+}bϐm\^ ×mbScqb}3[1^=^v,8tf{++ Zm^Pjo`؍Ut0*]VUA WHWRѕq*p ]Ҷ J=]}3tz2W(ONWap;>.:-P"t{:aDWX *pu ZNWa=]FhTT֢)W |B0/pX:SJ39wF ~ *gq8,X8jww]nɐei?M`!YghkBRmR_!M#+#U*hM骠++ŹV]+'_b8 \Jv*(HWCtUv \]E骠4k+S+ ]3`AZ ~Uҕ\!"JuG]\ +BB Jc{zteb:DW`*h?^Pv z|ÄsW>apՉCx: e@WձUϥ]X)*pQtx3NW%@OW؈U8xW誠WWuoU kJo)0{ *Kؙ(ff=f8 >qϫP^KW7@Q?pT1qj2S f =o]'a]2ՌlHRH Afz=O+ڊEV9Z(% B{yǂtdJ:|81n00Rĵo̘DI,,Pˇ3ͪmDŃTIT Y~N%+'U\J~JG^*)Y]Ys%. 
GJ+t,9,;4TjK{KZr@ϻ4O^i0BWmҪ^#]Y_-U/QPBj'`ߝaf~[\o:DzɈ@7N-Dy X̓2)Ft|')Ry?ޑQ>7ԁy:LtWgoʁJ7Su'@ʗ*݁eaO(Wj} g( i T:at;+'aF-JÎL@:$@%T s6{<tYwEǗSG_<|[_%):.?dLgH"z3q5>Fcqɬ8b)F >HeB.%.j 76wN)U1 edhyEh4gw_ AҞƏ7K}BȊH1H1<$Q8 <UHlZ(I.ӈ.[QLĜB71y@k$Oa0;(eRV@BytFU>֦>!:/THk$Y,pW%]J1H-"I㸔bt&sȑOΪ6Aj  >, z : F*axLpVKFFZițLG%O&Ɍ5Y@vT@?+^~ HAFbb0Ylm麸Q̈́4i`#s"κuѓGulRNVzmalxϏtfϷ&()mis'}k㐋c;F*xT9~p;;D&RFYY2Tk@H:4ǺȨ1XiO Ih* hsޤϹ>HFj0[Tjq(X{,<)KTmvWzK'N~l8'4~8St8@e%*HK "Qqi3;,`yqV Yc(ΞADMЦThpduYHSfhleĮ&nQ\q1մPԶQ3mU*ImMDpB@16YVy$hSUĬ"Cr2Vg7N}I b*D"Gr-1q.fl$aa.9@p!^`Gh&#Wjc' X=mmZp%KIp3띵P<@ >+lXYiȝgp}imIp}+%kkV1rmj.n=i,=w:ئk)~}XjSïk [#860HIwFn ۆir.rܬd¾B Gg~Ӝ:纔F1[WFog=+V?ϬoTZ]'-ctuw_KiTŊo,79i%L~$53( KH۲t[@OG wbVډym䆾yYo~͏=s^vE O,mm(ڽ_ikoҽԵpvNK `\PX 4;K.e3Pcp4Ek c&ey,wdM/gF>p%SDKQT^s&5wIIP!B }Xϓ@L@y^!7/V-^.`PY/B4 2K7@VLS O*90V1ޅ!in_=eG *tG.~;;!>q[ m?+x0 ϫj$XJb|ʢ0ߐ`s/D_nbP3kuL*$> S<1C@ƌYz:"_T%3Ϣ|ٜv 쎡CKzrhӲ\ےmN/I p:/g7Os419Ip?v@:3~1ɻ_D8y0<={U ?2;>݌GO=Q?-UsyS@`o<9/>Lpt>m\s}5ʙ aZF Gƪ@$ru|+*'Y4Hd«,T $ۻ_վ_*oiP!'-r6!(^-Et\ JǸV5ɍuv3R㾂4ew4sTrTrtrXr3><(FbpykE(YDNs9`%} n+4PYV*l˒,si!Yk1 Fv~J5qvP6xK[ofpeoJur9\.rlI>D}U>-(]ݠoK]^b9ܸF oBldh|(v<t.v(v"jTldAÝ%s4Z29Cr^mm&D2&T.D饊^z`)jȵ{g7\r dg7K&r?s'.cf{01㭒_š3if"VipEJg}O}Om j,x5C?JPh)7 ~gp71]1FdcEPQ[ԁ]ꁧ'w#WGy\*Zm+9dBEAu.C+T0BS:Sa'}+^tG h8^9_'<U[IҐ7 FdJ1u7Yzo1`ġ]B=a Y/b,~zObǦr]"p{u/~z隉{{ſn4xm}:Qd1A&hi$nRn' S[ 'XEec hN{tΉbĒ7ɢ{!Cyn!;u )B_=<_~K/^ f>sJWUP (1 QJLݯk/YiqBW] Fχv58KYsF2 y:fOy*W4u3M#ocuֺ/zuwdE}s۶7Ѝ]uya籪M5^=}=ks-?t$nn77Q)t۝ƣ|"՝'n|+wkF#N bg5qNܴV> ;]W~{dRDګ] _1~Wxs2-;Erua{:7Vv^8[_%Nƿ'{K 4zk3vl3J%? NYT+-ϧ,fwr`סnˮRSJ5J`6h (uACy"@`*G.CNqQu造,>%.@1!.2xmPI$3|!st,+7ٳ>M;p+s^d M) newyx_b^ˣM:C"*:A& wq1B3k9;ΉĄ )k%wMt]{7=;<<Q1 2Jch!efeE$O4bFk*TiBF T7ֽJ!!O#e,CHeWnGR"^2k12ݫ{miyD3W*Qlܑ]RJt~]A pL !RR]C:$:RNkFAP$_d/7KJKPjض۾pO3n/m_m_0vq7*qpfmGrI",./#rR1eazo\lXi,0Tk>M˼tei@)ʼn% y K1 5alTeoM.ד >f5ou\|<<8g+0eܘ*^]#Kld$ЪWQ7zvi^C'>u62ĭd9lC~Q0iIޯhnzuetjk_&olҡ!0SR!pX$SS|~m^Y)u%\-oW+coeqdq:aMHSS7_)`:+BoXIֺU骀{^Uw{ 7_8?yy&g/Y(OZwno;5Ҳ+@!MۂܩA' ]e[DWygA9Hn_輌VFZ_$'IXr#)RQF(e81nWN1"L:u5qOOyuUb9x։Qշuɷu:>oDJs4O\|kWrn;X,Qd eJ+gIkc%oZvtB;yܳ#tw$GeVHND +l j R S5*Cz'ӨdrVk/;>K%c'u֜-A.1COWҋ1z^u8d>GjC-_|xDGp9:iN=! 
c7%XRJ|ߴM7уsf:r CPO-Ҁl(ē7 01)< I`8_C9&r6ȹM Ȣ` ,rbrX+?Ys6C>$x[ eyh'% =kt*u 8?GH"܍qpLpKϽ 7$ҳg'kK(AB8/Rѣn_8d>'?i`Q0 >k-UXz0Ѐ<4Ț,5t{'7Zm9{v 8M8'j#1 (=BGT1j" Eͣ-Sc'&/''uʳ'K3<G񇳜!'vQ+U)2™HZ(ą\jb'qASIqvy']YLzT$᲎}`YoGRFhֿJ$e4RpKF.KRȸr)&>J0GȣoFR4[TZ8,|6 .؈mEjBH63*we&#QH%92R'k5f m@j^˘LLVHKDlYo/ %X @[GRqZ [(ߢ wQɹ7F`u&l89f7T=9;u¨lFmڰBSޚ6'1Rhw1a)1.WY 7T:}gs2+lOVծj1f\{c{}MzYY mpfQwޒķ K^eq47nͱ=;)\uN}Ԛ;dg9(?l.$doG]uh9L-@c\xׁc`\b[]F‰uXQ6O[iT\(wp)BSϘ8P=tHbaJztI%u}ΩQϤoW@c$frPb'6ow?BGz`7je*6d]_[ ̛iCZ:ZJ|oGɞqDajčn&R 4>iʿbiMWoAϜgkL%N'n:;TMy~[ 0Z~''"Ir(!cMiu/;Py sZD)Z2li%% ,;t>ԭз:@8ɜHP\3O4&xZ`@Bb"F3;uL~c.sCNgޢ.x9yE}x}gUbOs!6C{$Hua]w<Gy}{QHY'Y;)?"_bdĨ 죓&(`R:K.RN8%!&DJL) cpVS,[0fp]:t4D{ bFY@PƽuH ÌМ^5gln5_>$KO6\R7Uhu n$.+G]bzE3р8„YtwP*E9$S14)u@ƦpQʟC=y}‡_G}bwps{Tx |$t>ߢ/ ^ʧPL7xM_O= _< rt e0:4oz9ʂR-:Ef t:hK_[WC ~/Oׇ&IOqtm?LR+ .B,nk8"$w 4'x{wY,ճSB^G8svi^C'_M馍 i'Sg[If~6gtA֯hnzuetjk_&oL،J߄TdHy$0GMA Ig*Dj䭍P('=¡a.e=I0pmtvOR27|1R(Qʘ Eq(b :bEdelT!ٻvJ_.«,jIɎ6^h(ZJ ~@wZ(]QEpSIXHTk$d;=0EğM\ؘTNL@YD#&Ƒ;dIz 5 t;A[8q r-d֨+xkf!2XK WI |9WPL|rImEͤ/_r{5E>?})q4#;z&vJkvג{W]g E2BE1=zw6YP~pytН[.MЕLCqB>LgWO~3 [,=M\ủIn]D>MJsjv h% 1M| 8< vх6.C}"kDwMm;gafqbD@)hG:.>j6&]mjqw#c1&߾aE]v'&j|SՄfjz9|6xRv{b?.;sleF+a(F~O<7)ȜB"E gǶhr`p%r@%r)!3KTYY&*2F i QuԄ7+fƑ7~iO_ef57#~&\b~pLLf`z_\qg }%Ӎ dB6҅hRf;jS~O5Fp"Q8VH1%YY< GqhS&Ej,hKSsU\Z,N ^F>d:|ۗ}xbtq]R%`W{zUSx`TAi,8hB#e"_R%R[-zu9s@gbrBdKbK &CgOg srd*hpT90WY˳=Myա{{_/{>EQ+F91pY NZK"6—ҟmD_NLEx\)tn$;] FZ\OOOs&*>ERg 3%Rnfl۽ب{{_MjИ:_;VM`}R؈.Y⯚{όYե0OUwkQŪ.Ȳ UBip(f{j#Ů; Tiھr(~jA i݀qdM%-\~5Hb"m˳e*f`āw,B>FOk[;5;?6{XsSXpvE&Էzxz_'y3j7S޾b4yE;O՚-m;u nyEnF?r .l6-]҃.sN7ÕWh BY>yB^╳vN%Ҹ\gyb>4;NSiO¾ a`2rХpdq;1{JHdkOmy$H'@zU@7*QmnlY"v$PO8PX/9J%1yˆ-3VJZaKYMכ.S + n$9Cz`dXp&,e-aEiv+ITJluVTU)rfBY^<)3WἉJ2sdɵU}eX%* ̀^"=܂^ZHKVx V"r) ^;M-qz0m[ 쫩riʜi6#2G _Bde2&nzVPQS)ڐ.qt0QY2DZ-h9;7L!p{rUTLĜ2B)Gaʊ)2hsيN*rU ~I)֫vb40ؐtP9r^hi QAAQ;Cv'+!D-rԱ,0?2еUpj벶/ˁϴ-շ' I_ )L{.ߢ5Vkilf F:⍹)E. dW-{L1em[nwlң;Mjurok9]bwZ {7\}[4x&bq:\Pb2?E=}55 reӿwgM?؎-qaIF'GtXzoƢ)eYkBw=nѷ=P/S2sc;2gbZ@.OʻrWsި+AeY62Dޠa!6c]Ȳ^^A \A PA FA &Y 9"WF{PQ&SB ' U,jTULJ.%r)"۔$Z $R&.2њ=q; ؟6^[RpYo-dh5,PYzm8 B%#'EtḞRBt7$3V0=פن2Ɋɉ)&MVaEMms3#Zf%͵kWC9\mೋdq?VmaxoF6=G狩"p**o}u٦\2*ʒN %d Br dc dԎ&È8ѧE!uV%["m_Z'KdD & VrAv;kB d9O'5ҏ\zgٱ-xF l]F>C6#t(CO?>Gק`C#F_&?0ƻYt9dJ.{72PХՓP;͔׷,=M\)ủIn]CjI̤pQmjqw0rv60.A̕7d ]׼6˯_Mh6NkQgӏ˳IO5|ҧ ?=b VW/3p8q~^zUp8)\# Gsͬ0{WE`mfoHkkΤ•T]`oA_twMjrp-u|7$'` S L?7ki4OoT9;a\RJKdYo~{kP{$lo`tv60]D6 iܰ=+X8E\WEUGVJ \R*/pU8t")K+eܞ*|oa?9W%Y*T`9?ug1F?.k&rS!62+x 1`T:f?wvl{ pt ˁ} Yv6E/Wٳs0a}oX/ҥ7a[c%&m=`* ʹY(aׯcX@KsYɒ1%X!(Թ^ʘ ')$$i4Rz:}i?59wyMtE }yw{160H)lGp˟4^W'2OdX"BЊ^ }:A+RZ=I rswKmļi2U>gȂeuȠ\E֪!4udдʵA?iQhwwX=Jo3 G U*ycJX%o7JUFM*ycJX%oԵJX%o0͗5Co[9^bU~8*8r⓪[.\WRLR{K忧1J$%^VR#-603ݍ}vɫ]j%vɫ]j%ZZG+eWj%vɫ]j%vɫ]j%vɫ]j%vɫ]j%vɫ]j%j%vɫ]jk%vɫ]^]j%vɫ]j%я Rz9dz"3&q܏y\4ÌD&cg|mԒI4 d\8Y_Gւ86ur:U&dmY?<~ -[i#GZͤZ{)~JO?*(deL)NiUIJOs \^0NZ/C W\$4.R69XoB <[:d3-2#"L#J08L/+ ϤH1N0xǞΖߓ:qZBz^2#N?l. ]XU Ö{N>CѠM 7@UgT(EPt.%74KACۉ[dj囹%zuȞ^Ut5+v <`V|LFS%͑M(#.?w`sB6DI)c j@B,WUཫ*{TU`!z5XIO gg!j!2PV輪D]o2<6B b.|8BsiIxʹCh7B2O{|YG7_Fqg{ގmď~lH#ԇxQxzs啑a:젷(=ᮉgK|6c':ywԟe/Zz 6>攰u~/ϫl[υ{,l}1brًi[V b*bf("N]J,\ eG7B5Db҉`(iD8wo  SH$v1@EQg OFZe<*iR,SggɑqX"RzY0E.<Ƙp4l(X;[~ڦ#[oi獓rd|q99ӝ.7= VLHX#DKӵq&WW"}X?O3mn]u>weG#Kìጅ(;I2Zw:y~Ann`U. qH mˌ{葓^Φb?{Ih:=mu/kB |Qt:o!Ӗ G"b%44H w G  Ad\&b.e,(Rނ0^̼ TV.-RٙmG&qfP3dtGᄌ>P)+9MQAJWُZ'8:%XC˻b9".M.Q)AE$k={}:n,jW iʡmy\8-lٓJ]?;0dudQ',%, 1ʠe*jǐ;CzdB? 
YfNxkbScBXZ:ؤYOPt\p5 2 q\јh@r<9MH2- ۩w\O'?Ƞdw ܿ/V[66Rx%u虔>x/GN 8c @m9z+BVlx[/8CqFn&X3);ɤF7rZu@W$B]O]S!*쁨똥ᖊ] *ƴ9ZdEmgQ[lAHKwmV$JW;DjmK9gu2[^BAҚ@$ QgYo e:g4İLIhȱ|b nǧ8Ÿ޲r]K!8`3z ?<"D"yUA;HFdJHƫWK>o)Cс3ncpfnT&ˢI M[HF x9Aъ5?r(ٌ~% Hn뗙'v7^b lgm듆Wn:z&Pb?ZfZ2Rrj=׺RkZkc8t'gu:y嬋(K,D#"@ ,(f"#GG@#D#D/)SO-1 ܤOr,N0 UlDtbe_;I;G M4tЬx<-hu)~|Y.r`w]BúQ3!bK{Ҧy0m#}i|9#Z] Z@h|?8?oQ/h1nܻqւP< c"RE'-;5~xᢳ]Q-Jy C7w֪/ÄJǼ0}q%Vt sOR&3?\}|4(5&΀Mx۸ϧҫٷ烟I;{zVm]6+YkElwⅠ֣)?ʾrO|wy{O\Fсg$ t(AA1 "\ ԳԢ6kS D֜!C)!ei`q^ZϢMq0H2{5=<4vvo˃|/c9;;<]2 MȢÊ՚G2jfQF23t6.XxeLYK IVg>1M&'p\*:Ζ_^ysڋRaWK1Ţv\Q4LyWv6o >ݠxW:; 4MBPM-^R8&+?W=7Giz܄8z Pӫ锝rO4͘ه1,˒@G+MufÙlJ}BBqHh)]ldS`9n[J5ˮD4# :cu0Vƌ"D.P* ۠4s=D67 /l\CU Ϡi}LSaDJ*!J``^kS5xܭ(xXw.- -py@GBnK;pWTkU<(?!h|)Eb˙҂IcDsEk&NʦxQ<2VճUTLF5\:P.c^hc 7BFMd`S5y:ʊ 0*|v8s 8.Ysx7K:hij%*ѐ)9Ӗѝ9֫oubjta8chߢ( `K]("$RnitNJr*b Bjer pUw[R*Qf^f"'JՑ^gy ՃK7E ?8oؖiG x-F3unRd}PmoiP։&fZF[&b?{'ZhS*|quDPABږPT]|Ja[pO Úoq[lGi+=4K'r3gG^Ӳ3CKhqj8 Mg{h9ޝ.`g"TYP;K !1Ԏ޷E10m)}p`3kH9rvLĹ /n}+(, _uk|MmN|Axehimh/ B4䠹ρ*Mz'zqOzNc__8MGgJٙpc{@9T&{PP͔GV JsLl8V׉+[2K'jμƚ7]suݴ2څU}zVZs)V4DIӋ_ɢSӈiFd" 2qVZbHBH 4c&R$GIGlG}gq<%qyZ5Z3u>+jr4 Ζs`ۭ6.]e%@pKi^Löv^KR_X}|voy-k n:$e-GqYF[I)(@-cQ~H4pjRVR9܊q^Hݔbm%[Db1pZBB] '+&=%"9@ B퇆XܗcPȸmqT |~a=(2hTـw4oJDAKGggƢ|d(i)25k @*? >CSmM#6wTdHyD?\ Knds*H c2hpJea8 LT q}lpB) TJ?.?fc6&m_+f-UMlb͸/|h:O7xޕAeշl쾳aydx/Y`B?<4 Gw$djx~f8RR23ʀk]{oH*Dٽ#@pef.Y &X0O[ꑌs|Ȓ,ʲLLqb.6~U]K)(wf0㛳,u^_kUÀP& -,ZUM.\7 #wQwH|WԤZ|&+§Gu!dצ%*ʾj~[Zbr6iᩪę;&31b`%+P!e'J|l껳^_K7կՕUup)[318]^U+Kʮ9HъIoLZq(^1fu&w4iO#q4ŴQU@b|,vdpkt{Vg>:isVKR=KR{ v*Sm";M)VS,[0fZs㴳E +Pz f*hSʽփ½uH ÌМ9d)CX֞g ]!jؗ=1W,d.EZ7L Z7@̀W ?UǪ0"餓sDL:ck9gk䩋ӹ;lq@Y lr1VZqPF2 4F΂6P"V6m黡KZTSuYڬl*/ )O^ :k9SBҊRj[\``:іu!3pS!Waa꾂-vr~&*GD[lW 4KOǡOVV,/GEe,:u~? XS<`s-:lK,>+ejD=kYmecZk#@H.4A0ɵ r 3f4Nݾ<tJS`biFoEMQ,M官c`vJc_Jy4i} E:2,6 \>o{ջ{~*ZG8[9aA;ht \h<*ĥ>mIO`&S~%pBfo\ GXz 5V҂{uq3A=EzuE狴Hx _JUiZؕܐEU&E-#,>9LG99K*Ǒ/?~8j ?- yU'< %eQoYROm;/^V_jl>H0meA$vM4~o_E^-_n E }YRf/Fm&h^Ts5$2|gJ 71;˪CV&½ gpcv3]V^ϦG)=9Udi>^$8xl\P/aN [cI"!`> %w zD:c|PC0 djR$('H  -L}8D,r 8M(rneLEw9lVJHH!]3tF ROoG&>KT'G39l{Fn%8Xjdi)>= ʖ_.b(J3xn23ŃUS (=  #h@ޅED  +l jp@lJa^iafSm8`)Oc.*S81VَQPgzPTڽ8enꚨ-POI|&Htr#TSty wS GB]dznp1rE|)8Ϭh@' e= e= e(Z PRMaXFIiT^PH$|d@BOۃ y:yjg9vI[rnb>Zb']#op[{&"'ʳg`H`\!rfDa;=w~^OX\!\KnX nyɤq9k2r^Fk">v;?t':⪺砬8?%sjct\R`Q=R0zQ aLj2w4ɠ"h@QAsglrD`0Hwpgܿw⬯`ϦEl+Cozd(eL%Utۆ;RG>G"*Vsy";R܎}#>^>58AAO+q!0"`4"_Vw:6h)P5W| TJ}8R47V]T_vQ\Og ΥށT:XϜ'YJ(qNv EԆuF"$^x `x`:J+)q"P9 ;R[V[+-;-Rpz[5}ZM$wB\ ZqO+x`8)bS+UwL 5XzpEJQr"FLg+?V3Hh UpDdL(SZ["q/6Ra<'ܪ"HD g_Wƒ~8miL@&{XtF‹otoæo٬Тe? Gԃ@QזO ܘDvi7.'k;QT'b,@s yT.56~1Q7q .Əi6l2 f|00ߑ#u#WYffzO3.̮LU A}k~gL}]+ߋqxӲ="}.e4Y‚6ex,`-4QSo:y!P|< ƅ9AsGCH7 Ó׻è٪uiAeA<`?ְꀵ:#Ccچ$Qq& IoY4n1ݔF^_e\}mDA9P}@\zX}>B[)D!mFն.Ȟ:ldҸ{Jlq"L|oKO|YԃB|Y7Kn+y鞴a);QnGEG= Tu?{ 3ۼI+6Qrd͞3ӭ c͈NDt5"zEjO|()F1Fɋr%-8"A.:IltHP7)Tx8=__4NGGRybxsl!v19e- (GA!dYcebᣦ$ :z]a~kmP7rgFFL&fqN<`AyNYC3e!Tw_j9˫MBwnd!;C9""R4M$T-5|< :gJD^^[[)J"uy'֞$ԷN Xng ,˦cl?w낏VR&L15B,.SE57V"Ӄ:S]nIs9 ZV*F+]v0aɄy&Liqk^5}"V+mθ\Z'\=N w;m!ݣrrݾ3|OUy+Iimm34qy'hyY4ǤS:Q`KlB O0?seZೀqyֱo3rVҴGGt;}Cᯧ#3垱3.ʓ[0E]JA%33ߜ 'gaZ(85iaתnrjEEP,t),ޯV秩g&+§Gu!dצ%*ʾj~[olbr6iᩮٿٻ6$U' #!@8876.}6)qM IV|>$%r( Y`jUnBGc.Ŵc5L*߲;=Jge^? 7կ囷ŋWr)sit=(G9H?] O' e-q%t63*A } ,?n'=ٽWunnNJV'\Ҿ&oa"!12`.U}=YG8YW 5xSTKoO_^ȥIW .NWNlnZ}'d0FeaC$x%UNXv/_g{uB͇ߟ9}M}٫70Qgߟ o`^ru3 Dg;NzDӲIT [Uku5vy]Auvj@mgE]zV>1wuGI=6Wn'9)`JPE`7%$V=S!R=м #vS;&G@h e7|3R(QX;(CMq."dS˾NJ[ߣĶ]vwT^s)gZՆ q͎"z+y_&uP+R3Ey:f"D ½HMCGi0mFm3J !lJa^iYrVk/;>mJPUсi\eLOMG ~]|6dAZ|"1K gX|(,5 r~ԗs?_$Zu}M^oXҩ69e%ՂNŶմIR\y MSvnI@5HLXfʫЦAS@"'X0]xS ow!X2^ y2闔{zv#'_޽ލZgnQ:c2(PV$s^rVwș' Gԯ:/*j G;UQQz+k8sJ"o Tfun=ܒ͛}|N9- Kdp+Lm kꍖRNYp:+! 
E9IŃG^(^랂 x$ )[yhX$"﵌FMLU!-5)FΆ{e+!Ae`7RxHF 2DqJ0l28%SG-uiCxK8aրW&) ]QQq!; IMb)sR"jY/HaÔ os7mʆ`m|rm$jK8*Ffd j0D6r#c}JmXXglf,-ijuҌd,_IׇDnGn-@nz^w4#v4tS+s6PdIxF4ZpXF"IoՊT`g`M %R=o"V!!S}BHL~gN% 9 4?sĺ:FzD|`-hrY|qɖHk>@I-.hNp>R0L*0C@DY eXKf[\ .6Ywlfon8BB,[JL.0"~}{&zDYn~ms?+ X7&Dҽ^.Z"اG,&h,ju `"{&CDT)h(sA)䆘,s%р~HNYX&ͺb-Pn-"U6Ez e%# _>,0!:É*KmDFV K-U$o#Uk*&\W[|9t+6⦬9~KlyJu|3tI:PXnmLypCO,`1X>^b cgmI `#kY"M-{;O iiY{k}wF;KVz&J)M_8%{xG{c繊qTbڬx#Ug~mK~~/0S<=[Ƣc\8߭c<ݹ }μyEf*Rb0MacSiM2/ѧUu'PYk|M1F8듍\;M:&y 2+KY̙C:jtr+e1z彈؝,IQN)93Z0}#VM%F|2_/ rOK-OSx0<*S8^'.{nCfɾ~~8+}) SKBd้䜦x߫ :*ŽXͿ{ݤTU cj4x)ӴafTﲇaw *gكt^v~ѩ;C?@DT@eM\䣂[jb4E}|Z~E:Aa7@]ih>Vo_\Cj9˓N hB}w)QjNΎއǫ{N=QK*A[)g1Hʠ§dK!y+xZyӽXJ[/WŢZ#N`LU"=Q+t*Qq W_!\qyH! @)tJr}(pUiHT'Yo+E)ֽy*٫ G-eAp'E_TG)Pm4:ϝO9Z(B29kϑD:gR0Us_ SMX)1qH[gI*2)]_q¶zi TKwksH7=K}3S8EӮ FtUaIT Էwl#mMDrk)qH?*򟨥U$*9jPH#&j'W7rOM=y}Wsm9+qH2jd`^`D2H,JyuSmЈ; P5W" 2/Qxd!hZL^jʈhA #(H8DkεU9QBXtKntk"]tx5[S7s]X{/,z(td3 kvNi#p|03 GR!< ST! `E%b{A3ak(QXD ƉTkC]:&i*sQ e`U}my<9]қhb܆T 3D.(%xD < ?k:zj jgbNHkm"_?{t sA3BìVڝ(gű~a5n;L0Uws,O'Ʃ?Qr"'œG G+R=y9ȣ=6T ýNM!½IeJkKSD1%F: O͓q+[5W/XR/ "FsI4 "`7I}'vF+a]-2+Fԃ@QnϮg8+Z;R%pod4wRWmz7C2,W^Wx:R |=!Hu/>{;MlTsWAዿbZDw֤ 5:/iՈ.S@OQ.Q"[&u1"]2ŻJa|c\X48hu9Rrwxzu3[.i26Û+ê܍6m8iȃHM*i)hd82Sžwa]|bm@ MqReBl0j5?YkFdCu{lf_{e].!2ͳ>3AԪI:瀝(.߯)!Cwx`9/LCc* s"m˪hQG9 I,,H!k [,|#p>),J/qex˶7v_λRLo3.7l8J^kT$T4G8NN.Rb.oH.6\n^<ݯW.t T@]j.ڒyrZʄI& ¼&BP]%ܥh-JRTQ'|T>$_9;w؟V㲹ɅyrhvfIF9X+NȰ.qckc(%*3"K(A*DS /:|P45\eht8TxGDJNcRd 3â`uU$"#IdbV'{~ps Uß_kLSE~pao2%9_Jه ģ1,򿟥݇I_za\Jz>wia:3L"F TQ1J"Ẹ7[ǕSL,s,:ʕ{J,Qȕwӝg4ʭ;+xS<{ Jle׵+9 -Eem*b5J9ZC-ec*mW_oW4JPU/̏`|Rϊb۰ȕﮚW_ -?6伭g٠~ ~kmK؂]JolOH뱐uŸ i_żt[0[{7 FT5/k3~7=p=F0G'!JQ8\}aԪ#/ F^B& eQ,fʁ䬲KqTɳ_='AKd_0~Ypm6.c{kR/|Y~Yqvg}ZTvwʰϚ+o`6Eͮ7ed:k &(^cdu]x nXA/9RgYX97^M??cWOi_i|_ƒK/T0PǣTl8sJ"To )>hhh7CE01`7Z K95Ffq#X B8ArP< x$ )[yhX$"﵌Up̣gUHKDlx QewtS|HGJĖ!Sa㕑/r8jK&hS;i nE IMR{,@vGEgO>B" h{+%R.w\lp9:"94W([l|gETމO趪rui&qT|uږ1X{`k-jaF+K"x) :UfqeѨ)aXS0Rh).fJFHqF@H1rK%y#32x5sf"(ܚ19;f*ta6W̺PtQuᜢliBOדT_ҮČoǮɮ] k@GΨ PD9dIFi 1pE1tRE#eHΞQk %ImR!SJ`^e) Ӂi]IR.rkl;+Z;w쫵UfNZ`70J?{ƭl[|>N`bw{Egږ|%%sq=lɱ(6 Erȏ!DPCX24)\Gk#]POj Zy.wEcpHC&f(訲%IԠYh8%Nv43qv&N'=TzW{z7$0*5JFv{>h0CJ!%_S)Q:4Qu}r̡Eȹ(R?*3s/!_Zo_uXk mvG.ݙy U$^&X1mzf$R9gkbe;@^9,ίu͘qSc^0by`xD`yq:q:q:q>2Z4B(df ay6X!r#m$TJċX@:bGɣs,Hk,Wk}pt Hw]q6yۣƏ%e6tMp>/D}PS ޜӛD?tw/%aT+L((w(G((QGuRg>%}O1<3/%Yc*pA"D!mԏ%&j`jfhitx)DoPKQ] 'В%ZϺOZD,N6^E/!y? 
2ve:Y㭒 _u\!to1=y?^-hK?= +seIϽf"WYPYLIG4 Dy9U#GDV`Хl lkrRh&j2ԾS"+A -ī}QD %AfT~pv&Ξ6#N>*x߻3ڿh|c[x >l!hhPR9NDr R0'z.1hNSRr]ipqu]Ճ}y_ua|Oβ0δ=]_v:K 5/0i /5 i~/?4ix^io?IJ+i]4H(H[]5qnUhW Vo[r˄!C+-+n.0aniJlLiClE~ƆsUƲoE-xW-6wS_{4*{#*{GhUtDqt3y7 "\Z2'|'.,8"jȻ>j8ZwR:;՗W{@DR*^>S`4/ W'p*"pRfW\= e8#*GWU\W$pUfrJUب*="iOڞWUJ\}p)+bJ_yS5W_7F߯\-4 ګ$~+Zo|h:/&Kz9w4ѴT4 ۏv(oh)L>^'Ej&ԥnNi,Yyz1&)2hiie 39h5{m]BqA]݌c*hvq~}Mk򖓪iNw~'C.Zf l̗ V5l><+e9S ]qrEԗ*wpQCis%]ܻ[o'o~]hq5z].Wm>MJMhX G 5*_7y>}Px]EWI#蓎;8"TŕGs"]wJqЙB @>#ŕ<:wRZ;WWRW$@⾰ygJkpUtÉW W(-ޭɛqVn㨎tV%=l>o_I&olK:?cNO[^/&V/LRsOhJggdnm}<f>11ߊ&B'%uq1+^e4h*zOHjGw ; i\6OL&o@I)$lz<Ɨ_Zms4g|ɾwM8㧞Z ۾>l d!"dxCښYtNzQC/#jb.qjR*͎7V{9YsBD!9䘹𢨐-H+dK*Eř:!B#ߛg_@\Φ?/{}e]Mu+VnY~Ck)K9ے*E7A.-QJ=%P:?Tt~@`ҭB\ɉt+\$D̘KMjҘ $mx$#!L8} X!Ka|0]CG$ɋpx4sULT.P7%S@6TH4yeUo'eo=Z:H`mo3 V.pW 35M~;7/߀}J.4}@U3Zk!# }J +y]ݣcUr;DaR!1"B""Hus6H]#sNu)%E/򨃱Aag1M#N?VOoiPp>JЮovjXd|k=*ؐf6W Χ{Ya݅zr=o +^ilo!69'U:v+^uNI riʨ4Mki(zlGe}4 $Wmx?[;=j^)}>L'--nOr}tUqJw+*Whm ڵv6K# ˏbDޤ G]28 ȑe n8q2O7N5DK`tL^[c7I%" R5]PTr\U% &O4I8)6IHAU1a҂ bly_ba[&c4Lb&ӳym3/i\X%=D4L:xXA2^w0nd e m f ˭O%gb@s2;$ YdsFX 'LGRJix:b S%"t"C#\:gu}ӝ|ܝ.m8̼gČ%Ə%tn6tMp>/D}ic:> (EYng`:;DgX@zBA*L{y&˹Z˵.{k]#zDԹ >%}53<3/&: DbHCT* "fhitx)DoPKQc*ؙ8`}bL<^2>~l̒clGj;##:Y㭒 _ 1'$^3 +i,,YT"yurbauǕҊR۲(`>і: ٩}JBSf^ /_bOGYtD],D= +Q]kL pg^ȰnM5X0;!4I^Fn?ZH;R"3kbҪNX솀 Aa0 9f2 .uXg5e5r i!Uq劥3Q2<0eֆaۇnﺮYsaYtykS¬#DHI(b1jAe#H(8s+k.,ƶ],HFǤ"9̰4ZI%pD0AP8@2vc)MU@W0naSeT`cR+ )|P<&!TDd&j}D m"yDKQ@>@]q[zK8ю3ICE=#HMQm!18) @_I Z65L(7`:sHQ ZBBn*YDGB}[VgR]|?8ЗFPpeQ2%[@*>T}OȒ쯱rEQbqoϓa^yrP).8$ᲂ&y^6*A$43x,F"rvKA#33K;>@ ^Ϋ{J`! H %|Vu TaBޟuFev,-fDZg&Oهp]BT3UWȢϗ>Mե\IөЙ; X~KxI B|JwUy2Lkyuq|vQ]xm!>Vsi>Fq8xQ,˙]1B4.h10zjƑ>e0yY]*o } 3f1r`V~ɦQDM?|~OZk)H%K3|qyS|B)]ˑn0u*d^Je-̌|G2=1fl-*4vW$Ƿ߿//{:Wo_/]fFINpvn ?~вazhC-u5ᇌ|qWqݥ/D 85)X(cU#*Ռpg8H5@C*A sn JH T{B6i{yk#F,Y}q}jKFZ ܃٫#u" XH2*F)c60Uġy&prie?i\gf;..5&l `Kt3o9iNyCFwN<[Ԫua\w =} 9W׵9?k5bR9c6([8S I.Yv9eLP|db^w[L`ylqHԅg-R؟C"gN\!rfDah`)wNq,JAcCRtRC2xh, '>IWZ|UWNJ]mO?|vQ9(U A>JsMuc-%X`[5zQ cTP4ɠ"h@QAsglrD`0HSEL99xq*U-UwUގ)ʲ͸G,BJ\|#+>>#uzs$#)ٱT`:\qp8]ƑTo&-Q{sp*%Vl,Fh4È)"_׸h`4h77Rny]p`w'-v#ۥ.ǝ)z8ڹJ4V|03 GR!< iNRU6HS&He ^0<`0rm%aE$`@6h9 ;RxخYo{ ]3 YSCɻRpBw{+dM"W<+x^1婕*P;!KA]TQr"iG' ߋFKN^">D+";oSrGRQ{ZXô߯:u@%ǣze,) h3Hcb0 hHS b!, /z}=d0}E8$ehQRFv@hD/!Wnv7i*= (XG@fk1 9sO<*?`TpƍAnuF8ыۋcRRS<5&,+=#>ep6>+Ʌb_nLbQa323{}?'ɏxؙaDz>NV9CJȤQ^ \qXt ާ`$;*oV4$z\wK_lq7=ϛ z}&fzx|wc)t*+ˎW-Nsz: w;IU|yzb)u'&.;(glwl4zӽiv%'&ɍ=J.0AU{fOgƼUiDM.нk] ODa-5ZjJ|P)F0ir%-8"A.:IltHpr/LCo8op>혇@;NxM%-rk_uүa:ǖhިٔ(Z|**ַ%~u>aZ )"2>P x ?X !{21 2=!݌nphi2lU̹A"D9hwZB 7H]M|e…Mj^#2\*밊a,%VU$ m靸X2Pa}*cc8C!f68R$#?g)Oc.*5X@UևM 幰n~.20t%^ܖQ0P"K[f p=˲a )nnM6}?Yf?\dO< .<{:rqa-^4hZ:GB3q$By&`A۬*M;WUv4FUiPp+Lm kꍖRNYbD'HQN`Rz.C0!B*Ci530{-cQ9D]o9W|ݻC4L2`20%[N,b%=,Ѷt$R7,~"a X4VrL>A/FΚK!{cG,ڤO_ l`LQ k4slV1od$qA>ĻLre79Q#((,hez M98895\4ъ9umϯv% {@p|qu>/6[L yrm{2T&XwKLOQT^v,`y218WeNV28$SGQ)'/RBr.E@!D*$GAĭAR RmB*YaR qƦX cѓF/f f8mKL}U^< Fz#v 5IpD ܻɲ (Wx F M8XVH35:AC8 سp˰ R Ƀb&&1n"72W0b#g5b0̙ŸcSօQ[wi8Xgȥ:ƀDVHy7 E ՚HeJ&!N!hE{ &`ED1 /XģQ91 30 "MFD!bKuɹ!R}yMSLT:r^"B@)i@("M mQ'&BP3.8Q -iX3K#g5"~>b=MK{ŸdC\Vu-q=9`ZNJDeǀ/u`Q/FqwM V4iȕ{q }p뜵Ym#gL0gL`⬝Ufם3vߌ6]/#B{W\WZpT׫;Bn\5&Ó91|'#D#v="'??~wVPMcQQ졽+!ԪbzqKB}9{72؍ˢx9  -Ͻ0n)f3סHqa ⨚潜.C y٫?[8voTd?^6l%"6yG8ve/w0I(ObYYDz%Xv6c_"U}t_܃?j5|:`rDvV^r|.p @@"PdM$(OZ2/߮C O7AiV <)$LF0Ƃ7T)ǽJNOhG]B{TVU'>j >?S &(Lp2;Gǀ'˩00#g.G͂\x7W~QEu0:ɏi v`Ҿ"[,ӂ ?U0LN8#)Q/5TP+MF%ΉzT&vSxFABSyL"h )O>81LsQ%Syn ɭ:+ ArG h8^=3rkXcX"2?TF½ !!XcT AvF9™;#T-G !+CzRTڲb< Tij>oTo~#DbRx.`T`ƈ}q_ I[eM2N{O7KqB/>-/z#[T.Gj~_ E!M8O|,QݰnV_GI-Ard?OƎ샳4sMEz*QMhf6j{3Wiү:ˉ7{ݩӵ^2Oa&d{^35rس,S> j/K\JB]h3&Աqz;mo&e$'O_S%͙!Oe{q94;ռ?7 ۸z4?»Nm;o=9x 
>.OqH'N| G ֗n.'r~/,ri ƆὪRcIIUEIۋc5J '=et-}}g"/?>M>y4:+R)VcI-7/NYP]"2O~Ԩ֝>2 9klXx67)_?hp}6|dlDc &mWd6= ۄ}b^n IlJՒ{aNm9>Qb'۩tʙsl>dь't ϼb.y"i4fA+"8/q^< 8gj4 -2tr$" m\ 2V=!9G.p EctW2tR5Lo{!Ͳzc.g>32rI]e  K]>2=OiPDl4\ƪ@l4`LdɱJ@.Us]lt3M08z Xxe`ԻԵ{?/Qt4q;"[r-zoAxr#vT{yݿ+hKKU6ƃaR.1)yg>ːvt(#ɞsoĬa^t23aɄa 7a|e_Izm 8uRJ*.&xc/`)z⭔kF=aҗeP˥(@s@sA쨬 5OD#7S9^&<=w&*8GsX'q}nRp/"bIϩQüPϢxn6L4.M;!2 yruy^&0yTy[0J(vpNN| uT(9EK\+Nq תT7\nmݛ4k٦mr}sv^9Nψ9_4蟜Nfm 7;oOp68| Yԓ` {uwnYCQ~QQIhc^Gt>gͽ۽2 p@Z8ܱ@qtQA;&;>]Q~7cz\~y3 ?k>^^pނFy)L< 0I9+wG U U"1b8S?wox.w)o7 8z\`&ᷭ| t_oy@תTߴk|Vn'|~E!/[J6߯? h 6SųXză$v5߹䙈 s (T<%'Y@՞em5ocs}$w}%줧pKH]tŽLh̀Kd$HN*iD yHF^sq\u:Euv*VOl+1F]"Vb mNwv>=T^skىsyP@GL_tuB%ku+!NƩ/(+"2Vs"*kq F[im܎6h+OGV: I+(3! Ϛ \IhhБPM&;h5xx;BEH5p!{4*wItOZ|^PKNzR1r5.דl*9837|b>[E|9U/TݼOűӎI;FaX7mAր?1vZP\FHx, B )ժ}Z|k9lr#I1jbG j6PyYkXj JH:H@IR/!A$!\^LT&$O5Zks 2%T:H1rVk %ofofXWUgD |+zpU>:TLEвm<Ǔu(뙊q3~tlMyy/_&_ ˄SKij6S _IˋT'_a+j~djiܦq\4IHYY3o3_Ѩ^N/[ /4B^gz8_!LZ{c X`< dyd1&mQ2ErIzqvu:_}\|r-ѵ]jzvyDs7S*]PRCņu͸^~]C{v6Zvruomj}u@,g|_F?hz2jc]?MPl82ſԉي\׼JzslpU*XYJ7F1Ө`eLFnAe-[˪sֲ;a-l>O&aH1%Y^5SF"56NE 󹪵hN,NC'"%S=|V^wۧ!6zqlsTG8y~{uAP:)*sh,3u"I`'ΙKp6ƪ2whـKR9H'399N2 tT90WYsXwo@g"4[zޛ]@Ҩ6EǮ6NH5RZ)%8FȨԑBe{,:DidJ]CvJ&JQrw04qLL<<##"i6jDΑ%צ3K6Բ›.QZ y 9s,L@3PFڥ(xm"YKDr,N:$dx-s^*\>d$*?c,(cYslFRBu` M1֏hǥ>xP!dM ʘ2ou@z9w0ԻA 9mمaKeSd f22&uuΧz5$ꦎy]&%'a/ @DT!hI}}Bŝ ('tC:6K˴#;=PwM<͠a;4ہdrP/ pS#kPfh>D5v7bf7ʡi7TMt9aR9>VWB56w=R}9O0>Tvz/y+vW;ݏ4b ql1ާxZր8z&+K{Zbj3cF聯ܢ {w/ͫɛTE?v_VH~er ̯-n][/:ӒL ]qqʻq&y1@dF 7`kȺM"yrzsCLr.}N\RK}V'^*rЂԂa:E GR6%YS^h14ȕJ\d#z+\<:bza*IUmݡ<5C&ͣr?."h0J IZjx1XHwk/R̢(9!  DpĖ"A85ޤDKӬd2{W_|_-cyԏ۬FGLm{ Z| Ӝ\~t׽]詜9L멽Ri**>t^d:JTaƔLӪXQɌR(T%MP cv@̒  1dEY9F:εr@62Vj\VB[ ׅ̀iyc8|<8p/٠qr;E$!uJ|ɉ֡XIJ@dV`{!EMilQ6*Zf` BC,Xl8O'q jWEm0`:sk9$iBrK͸F@FIf#m ZXaƺ,db\d5hD 1ɺ9Q_f1` "VED8  {9@) 1qB!ˈe a$18`-g&[:#oU3fl)p̤2R՝pG4/neȒʖ*jGT0"H=Y)W:͒⢬\F\ܻf4SkƘ%*27NPHQ*tU~Gw֚CUxI"mȃ8s֍bnx"G- }Ng (*f1רjoHQVMN'9S1&2 A%JWMQ$sߏ{ErR:PQ*dQ[|ͻ٧?|rK3ШdpLX[B*5=mIY.e`cB:c#T"r1aʐ3% ,rSu7[ 1*2.xI?V: bJAF% 9JfmdJ,OdK+àUw:h56w=M&Bb,voNe(-xJ5MɾV\K>fzɾ"% %c>-s<YU* 5 <'Uw,rwЧuj,j=j>z^i7Bm> oRrnu H 'bPZ/etia|ͶCjP˵{|p+c刞Zσ}3cQϷsWElP=CQIᯛZEhny]ŻY EOZnV≶9ࣹJvͧvXe(?ОHxPy]9I\FF&dKx6`S?XOv^gH}pIu(R:D1LQ)/,.攜"X`ZWhƝ8RF?/cK'-^]QZRyR͎Yߍrz>E_iG1 USY7ThNHJe@՞!ULfZ<_-"qU,KVHfk7dWD~ H`%loઈ+m_H;_4s+ɸb="xSׂ"6}"<*RWK}ڻ""n)݇"%!\!W^$0jBɾU;AR czp%^OT ֢7[%p\ b0\AE נ7m 2KKUzG/ֽQy+>_PD :f+CM&vmny3pu} 퐮.hl37nl;5QUI&4q#ɿuQ1=/;1O2Y{&?e)>%6k`"BV&ݻk~察סoo>w -`^!H|]WE^՜oS' _'z^,##Qêܡ|x`>OUx:xZ=.C쓟g*[}oY]giF!Q6Wl(LP5RLJ)O~azy ߘP~7zg/O=rwz/7p97ƈ 4JnHZLm6zqm|pIZJ.lS'ӤD! c. zeLȨ&/8 _ R"@iQ!Rp*D֝h](-B41b=`iվXVJlZV!;3L&fqyPlR͙%PE&@`VYkph!Ѡs M2Ҋl_hbZ<a0`9gfLɏ栢Cm>1h-AaK.P_V;Cq2HPP>PHBBpl5 I:S`uU6 CX 3Ys"0^*DBApu..h|DSZfak33eP?=kXP;f͉c̠ңSa9Y #N?0jqn_= :% $OUziԽ,HJc{T^Z TdN$\"$]͡ VtXg O`y<#'e>|`a "=dHiD0iy]*}ĩ83;,y*Ժ<0{:@s!YD x4 ; cC`JrC;kr]PXwtvY>RmV\[е2"9qwP6"~,EȋU}L!U(% Lgp5bL`QU/Ubp ]fcvl( BzxJ X-_U31cehhm=<3L2Б{ΣnP:LlM5s*!jPk {Lyzsp.*R*vl` v:+lA ۉ*zXF>?X;wHWY g&,ά)6Rn[3W."X i)퐄͐X` Np_(]ycȷlms(`#Bx-nrufnrn${jPC< r`3Fφjmj0P𻳩ţbqվcXsu^kNbݨ<[#4Fex31W]_.φз]UfNMs K[*tyCua#(j˝*:T1,-0 dWK{JyHX׷nכfO6Ճ'684`uEڰ $\FG_u7Ŋ*oV1 /8y¹i*<$qᣦ#, bA l5(?<ڔ6('*М}wQ-ڼVzPҭi@d]LzĤBjR  tM%G<,{Piᒸ|7߼Ap>BU| de&2XAe֚ .{̀~122y_hJ7a#Et5jQp1:a&فn…0 6|Xfs˥ o&P˩XLԤwV|IP^.X,Z)`ZX-. 
v3J7B;n)D0B1TqQwgQr~~y(ٰB.`v $._{Wӟχ{(Yvar2)3L9\-sO?p_t;xxwd*R!R 6Ra3Q9ט`49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@49 i怂&l)n'a3gYOb>^chHs@49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@4js@ >Bsf;9 %Lh}8PF9טjHs@49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@4Zs@"eK9 ;DN9PF=菓G荁LsAs@49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@49с9G棦_?NuS'~Κ򱥃^8t\-V^Pj+DĖHcKOzsvCt5Մ+~+te;]M)]Br.H2+7[Մi+t5~i|(%*]FN/gq}|;xs溔wۓ{~uWO魡3?|8ֆ|ɹtbjޘ8'O?*<}P/|/|na?7(dLa[;J;?_\֋ooV돻 k;?wK9|گt]wP7_]]yT~f_&>},?g\w0\zVgO띢B ?(ǭo7ҢpLI)s&ynZ$1O`Y ,jdW|LmĞ5˔]·F #,a' DѪcO8DEސ`ri3Nne'7цx(Mur1qnCtlg|ujtzSM,\6v+t5ڣ'(HWDҖBl Մ˼hDXUd)mf+K-pC ]Mb )]BJc]prۡ 7c'&c;ICЕ|jޕ_~h?v(?rͼ/NW_]=LWM/HAh㗾DWtGFW|]S]0q ]M^BWmNW@yn+.;T4I[޲EWMٰ!B0s-\z7C±DNY5t0V;mt~^Zuʅ M4yݦKB><]8vˣ'zӹ;ךI݌˙h7e&J?_aS] ]M~3rGOWxWHW~&+&- 7ƭ򗖮ϧ%JW_BLlI]MA6CWW}Yt&ʠf5UL]M7CWn2[8;8z(=+]BJ)ZN+vq;. ]Mtt5Qt Jıݒ!jyh ]m<Ex5zHC'C/wuwC+/fPZwdwt啮:L܆ ƃkՄU;m,+F؝m@m1m-|ԱSmZsGWkhnx!yH~P;]!/[npǷ/DwS-9S7}!w|Yĉ""'L[_ܶwڬֈ@%#oɝ`KNPŒW K PΦ Zu\JAuĕB0PͧJbsB)JQeNlpre65W'+͈9]i|(\pjePqu~$)lD6\pj:P=NWsg+˲P|W;4}ǃ+u`ѫ!LGk e\ՓK[^^ pZH+آ\ i26d+k^ZE:@%S qۅj 歯\\On{ S+;+TiSĕI-'V{CE97Bhs_|+b-&5>4-$Z}ܮ (W\Pi{ePVKNoRu73NQF]n]}:׸Rл5;|dM? 8o:;_=P[9,_Gx_Loqi:7x"쑠eW ={٪2<R,ݸ' ҋ !13\@Y쏥UҐ`Vrkŷ?J.l 4-%=Vo@ @ \}BP'w$70Oتg땚=jMմUE;g-an֧4Ìf䪁`D6ȕ24PKJcStդƐp-g +T5TJzxĕLQ'< lA$ JK pQ lpjMA& HKXNA0#m{Yzj:@%%wu3Bf++@mRO=>\^FqUKemoNUOnԪꩴ 5p{\[TKbMFlpr`pj:PqubBʭ`TL耪 j/*ATsUj'\v/Ϧ[*cSsV!+r;ba;2YLћl7a02jɁ`H6-9y.-9TtTvmՁ%&-9n9`U+T}\ʮGMp% Wg+[B'4ju\Jq%zuQ\#'\pj_BB:A\)c  Pn>ATLq*upȜ@0%&\\js=NWFf+,3Pn>B}IZtN |BQ.Ϧ*u\~d#•9͐ŸZyU= ԊTʎbj:ѯ>#Q\1bZe,\Z+T)eSSQ^0z!jW#ߊ(99pI5Uiͩ7HNP ߀j߀*} 4\ܶ䭧V Uq%%XI P նe=9E\)Fi SPƻʮ Tn:\i  |pr W8P%S?fՎW(W\pj_{]ۭ՛2mN6\d++U(B>\ B]= Zm{sZj%mJ1\=-z !*#\`)U"\Zû+Tٵz\ LҌpyK:\Zy U޻:M\)h*TôzVnuqOzl"U/c[3;=pjP/NAlBSdK6jV/ 7;ݸgԒBOm{]zj5zKU-l aLV(Xlprm6 Uq%%e+,T>B+P+Iq*quRRt!Ƴ`U%7\Z!+Tٵ_z\ d3~5D!Z%+TUĕQɜ(زlpr9&ղ{WW'+Ϫ1j:PƻR Uc$$CV lh}W5v'T5Un5%9Wk{\Spt)P,z&nw:|ӳAX!ge/==4yLffNhH n/_~ h;#tdF,^D%6RV\G_^޵etlj9/l0rkߝ;; uEuf?GA:x3"DZMHK,yꈨ3L>|5NWof`,/ yyXUߡSt)Jp' t}s~rljb%BU'7gSy1IF *hITFnt[qw8qK3౛"i.nŐ`x旎IuSr))Y2C7Ԧi1KH$FKH]Jgꡘܶ,e~Xp̣E&ٺ/`'Cee:Cfk:[o&p9nŊ2U+>{nqIy7?Vm ɅƎϣo 76m4^*t5Բ!U> |ͣWmwtU#GVTΘ^F]0JP, ;Lr)e ;)4ZQ ڂ`%E5O&hI3 ׿iy/.΋UR_gcVr$9kz.\գE|X4JfVP61 yLdQ)w`V sUؙΦXyx@+ l|Z"]U{\xs%<;/W"ⅸU;,FIhx/S_Oe`URB1.}/*oy[@q-P}tNb-X$tX\ϡRP9 9&ϷSf꨼^Cb$bJq>09[}x[7AN*ZǕ~,) F/lrk<t!w/)%U pֹȜN _3Hn ڵ Aƺ$xm٣¡qc*EJtؐJ-1PNk<a%x \CQG8FpH kU.- K,cJINO .t)2ڰ#ܘ7s Gn  W$ǶN@%9P;Ftק_vw??*Cm (Pi)L( b4VG/˜4ޡJ[.{u(4LjM]J1Pj{csTLLZ n2Zn46&ߧ.H_SSSGt%gV;jQBXL\+Sq:WMxb]*\=slmFqR)]T'ϼaZa@ ħH 3t $#[bʣUeױ.1/ Z,#e &&Ht'+LyT1/:֡:6K{ϯV: jmaΠ;@aւMuݷ{L_硝O|8Gs=m7 ޏIE[g isOW8Oԁ|^PpndN#rc( Dhu>i୍c4*Z驎9oNok+M2mi7X1ާx߀8x&\E\Q}U˲ad~cq+͟AӶc+o_~g n}morn=\j\p(#.o24R*T&C9;uz˰έ/: 2ˣ?m T$}Z|I}4B[h8ILt;k;ԏR~ۘ*MW^}O`{{ͧTufIcdDQCWpuX^?yx n9oJ,ĊÓz|ˉVM< +l:ݓ~<bP=Wptj:)wXַjKDF`(Wex8}}>|pPI)?-ẍAUJ(!Dbv=.p9x_,Bp (1dpp8o&J;`@F~Gv>qfC;>}  f4y5A?{obo\]gZȑ"ܗݽIC6`0,p,yr6+Zd%Yn $f],X"u)+z((?t䑵@`cr< q4"cIDC>!w&JxC*&hn\c?H&&Qczek>Dx# t:I=`T|,O?N*(4|L$g|鐈ĈC%bQ|* edr35yP[.5Gԣ6q ԡz]t6^y!r)t ryr1^Nb8J>Z/.G : YbE!OBMbWx%"!:qSAw\Cݡ]N!8TAh* LI:!gN.1a[E3%hc)DDAL&T<%E!RD[qe9{^; T?vI>I^}}{ȗK۬6Z~`{t5ysȩ-}ZZ^gƉHQ\^'z55p<Kptb^վJvN3{հK_޺T ~zKʤ&hgs8.Nr:cqȋX"ϥ!Hy*'p™mΉΜW`|էhgHFɕ0:z>X&2@C594T 0F+Hj'KC=ӬJOOO2n 3_/֮w s=vC>[dr<]2㼠1l%,FLk(%pֱ$IOg< (R{g(Z *"DI4&ZmH©IS.6Q9SRHmZ#gց)\uEyGX`{,sr竱+8y AA, ѫ<]ZT7x3x2jE{>v o }3M2xw-P 2pywƝŨ]ߞVe 8/s!Y]BCFSqޑuuuu /}' Y$ ɑHRʧ?Ŝv"&긧xnEd ѤV; T`7TG#k"X\牖[i#1HHmWtm]BU.pe/2&k9{~owǘB) (lr`p?OaH*d60)ZS.3U[|dlX;aS"a /4 ԩ U'NGҝxu';;NN*!p8x…0k EpU9W2@ҨDB%y+N?>h%t&E3ߒv,܊ϪW|P|`l}^y!G1"ώsik;5fzx 3\!9 M/A1MϾI_ۤMI1 !1ĈOF jb`b2*lw ^D\Y/scMaW-~7z {3v7f<|Q#DJ^H XEsC uIq*jpGpJUQo(JP*FVG#$K3H#^E<-ٽ7/`&Tګ]_}em56RzEYf 
%F&X«UT@jJoG+od*+o|7??SjKqc6b0H("iJiK}o:Kv%lztttx}ܰg0u~dd *&k8BFGV `τ -`|0URl0b}Gn{JyhڢvU77.̆׵Z#;+ 9>ZL e (@2^1@yҲ7G"d^ٸ `x`9<0P'"䨪E0Ƃ7T)ǽJNOhG]B}.TzQN\p,~,%q xS'l(5;xui4WgtNmJKE TpE)ڃR{鈊^#x׳aEO%yL"h )O>81LsQ%=i-628Ž/-dGDЊ8}aWU h%\ZXem$[P2%d7[w~7,uhѰfm v(PԶf7r ĆyAdKx !p#BQ/ސMlR\I(~\XzVb)ݛrv,u8Lϲ՛~'5 y 3ɪ_ E!UR =U>O'uv<ӄ+?~/y>8;yU )oCVҨf@aC?>tV|=Q-d" w;\':obHƶ7,*՜<l̔/㑿]f% arNN~9'pǃqjǒ~O4y/#Kׄԃ:Z}n8*r6v\oLW5{gKnə)܏?ƻr|/rݬ#9?%yԗpLF1iϝ6H-sZT FRSeiphV ǖ %o:4%rPyrۇDW>=ϛ _}m9i:+R)uvlޒ~Ux(oY u6 J4u sjnV=Wk|:6;x#]0֨LP$ɨpzխv5|ðMoOm9g]VȪnhVx4kVYQ3mX2zK͡3\Ɠ\KQVgĈO^1<4VըVD"bo8op^vC|yB-pbgcȻC@AZ%eLQw62$%vź>]үQui5X3mM۫M.1<%K˩I}"IV'cIF'6 u v,o{Kip/ULyyGfl5S.ҵXޏ\ϾW?]]|vTh#a<&%N}c 9lG6x$yq:!ft]-E* 0h?ӂA`c1OOT$N: 5Fȹ`B-ɝ vꣃ;n#ӡߦ3zO5e'R͡v:$\"p4RO;`>9i#(kնN 4.sEm1 C؝j2D&B ezk3,ikrڰw)B4 U)kח3S?b]Y{|ыÆײD{'`g'X;6wZAH&V@ A~Hv3cXsc+_~|S"Ը/ ߕdW1Pf9)A ==?ִ>"'-j|{BuIϿ UpL0W < ݅t)z⭔kF= /' ˥( n9oӇ*7/0!`JT=uKVl T#LCf4s#`8LY_G%?1r+_dCyhlX\.UU!2yrYRGrv~)6/Ýx;(g gŧy,ztvyuU#rO#p3'LPr`7W q wPk#[ͳy ?Vf_W~. /ޭ p |I̥4b8Y0w3b AF|3,vq| iIƞeS7EYe_cL+56|:.&st9m;+#{] Ʊ{ye6  dU$9Ml >'t0[7> #7c?|z; ߍJ>o^-8$ e&AXRqʁPΩe9G] q鷟~=}{_)^'|/q:i$<_DO AGt[t ʮ¨ ߧ_^/C%B4"`@kY%'ѣI|zg").P<ԛ%'Y@huL2CmԔlד.Ez>wКȼI"RkTs94$Qy'uZ9TDk m+J{L-U /{'/z+L=Y%""J~A#o{~BSzx(A.,K/:=$&` qA[yc`bҙVgpb#Vm{͚\`Z rL ݽAXuN,N(IH1˕Wtn}_ޢ/Jb5[gp1%4yVKSInJ[*`J?xlm7|@U7n|WF$f4z7q:/MZu=]DҗE4w4z3~$Os56L9к_L2J) TDc2yNJkY!>{M\ŕl򢍮_@78d')GI{ɱf@2Fϴ,sJÃ&GL%mf1X62`M޹ռX\s^KuBT;fЧɢ&SE!h9ūl vCgvnpZmc˔oq%oX'α>c3I8(NuoCZsqS[xiy!ș^CP%MnEwe #azɧFD!Mh[ovl לdMޗdߪPi"/0-L#Z2Jz D75t,K!XYPH 8daJbu-;LIZ/})IQ wƜ^\%Jjv-pҊc<p$JjRNR2mM%V]kQ0CRz*`zѬCo`p wd-7R$[ aHQ4'W }^,8x E0,P_p ˉ1%}Ү<0EXإm :Wx 9^vS&z+u0}XH!K*{-uhESlJ\{-%2Lt aQn?FI|QxR|Z+s>ʕ >gwa :~U5*揜G7/M<į&]/cn[W,TR̦P-n[oH]ϻw̻ǯJϒM33~ a8:Q[G6ƒ E ,ӝ_wH)zR4ԥhJ i| PNdj flDe@iKR 0!2hfqh(Pʍ"6БaYqGkZ HZ.EӚ8KѬ%ٳ]m66pUm^>JsSH? C&N.4c.xdxƙ9,u:ss'y ǎww x7 M/w#+%6j录)Q521uT-1rAp*&XrH# ›|0z ܟ%ZH0V nw&v]ƥ|۟\{4?_x}NziV`jwe;?E_m5bĂjL UJ+J=LjP:іG796;oՈd p y }H@!`#KU$k{5q8]׮־w-k8EN27YYJN+g[:xM=Hg*U&2wm:7?>8B8!0"@8"$E뜧۠wNk:| ޜOJt} vQqn~f>r>_`*9@Sy‘TsT3,3 Jq!r0tp`0rm%aE$`@6i,eNhE6!.Cߩ Eigq n/q5vSHWWsZq^{eS OT)31b'$֘c=+R(%Wr/ t耧MfwZsydž*aWBQéV:Dw2ILimv(ƽDRHy@Ti-` q!TgFze, h(SHcb0 lHS b!, /:=\XX]m8$ghP;9Em3. ܘ![dMOiJ`2Q2b,9sO<*(8_`Tpƍ 7:#`|wK~qZB.ֳ;Eز4Ʒo~X*}0=L_eBVZkǵpXyQ$N_G"yQw/$H18G?_;1b:yoXk )OI jUqрdk}& CR~3>I>-1ݗ7^JR|}my/C~Eny0WXxջ+oGD@'? {}lI!dSigyt6۫'ݕN aaYCnG<_'xo +o K_[<~#yQr (W@! Cak1.&TSačGU dž5C6:s+ߔtrB1O#Iژh4H/h)|2ӆ̫wbt}Myzd. muI5!$*g֧mg.s`1$PyEr~fOɆ*ڮދY[f[>1MBM4DeËԽ|aue OBѭRWxSfhyv 3C[|Ŕz+4c6rK+TS*Wwi]z쵲2aDD ܅XGXX3OvoC#ɕkb$V[=lQkCQPD'w} ˿fEѦC [CTfFsicpI6$w:{e>K>BWB1T /;[iL:jF=XA#qw;KGJsLZ][G:&LFe)A@|zFޚ8 9M/hkĕhC%(IB4ąLy\O_LE.m|fXh`Y<&~̃1z1f!*@JRDQc%̇(A*Dm#GEȗM;$/! 
ng0`_c]l)#8WdٖdYn[đ)M>|XUw9ZNgX:>0: c L6ȥ-ʶNZF3"9I*3H}V'|T96Rsa7?Y 2#xe,OeGNyG*E.dD^rTKVf|ӻwFigRTHe wP% "%`Djw"⿠RNu(x: "2f,lpRk^%}qA(w{~A9i46F"8ާs}01xKg֟Q 0B0]D66VL30 œ\Uz7 =*|(~ 6gZN-G]V q`#;zo4KGn:MghzN0xvNm575KNzcɻĬkŴZ\e.č@ḶJqFuHOocW"\;XPt9j(=%z/꫱ŋg/_LyhŨ(lS~cHrc'Ǎ4z4+=hZ~mo8?]xLV#9ȅ0g(ߟ4OՓ]2@tq^[_ўKغXےT[:[׌XߌkY*oSk7e%N/zzlxsKfmnuu+A˛zϐҰ$K8%9?qcy_YJMa7u?]jNCq;xնbBe)*xqYgs8p:*nck# ]4Ro85a&_~^ۛW?=_o^Zz/)ruV`~}J>@[MѴiijo4ܢU^66ݿhWvY!AFZge60׹Ľ$z߸JK ` Ѱڈ쵈D% =biL潞9T%:3(@jIh(I"U.)6duZTDkcbX=6b%v5>dwM;7:]xVЪugO;VZpݙe8.(nƓ:eTlwK/_xV|)E^>qӷŐ.Ɠ l{t)?$ν |%%x4/j4RTY^gj_g/?gbBmNʒ& j${fuWeO廹M-֒¬ƆyE^ ~n.~Y/݃; _I/G9 qIhuTB h4.IS6;lg׏i899MђaiP.+mU-IB$9< Cźs,߄ϡFs"#P.ү-[ύ tD: Vqu` hఒ#]4sXܯc4s4hdLk aCvQW{p3Թ6g}=#(Qn*Ryn>)"/"*U^K%Q%Mbr%CN]zu=-Af-$  >R{X&<+ɒiB%-$m*I;WI-Բ%HA6jb~,qW([&5}Β9 u)I/kCؐ$y,8rT A>uzI|[Kβv˹Q.=;;2\QMsj[i$ɴ r<\%\%s +(ᬧ@ +d7at.P&ab9%Ę"xVqRR8g(7hFG5E\˾UvJq\RdY)_V%%Yf-%1XC'&s[nvH]]C.C;=,o3VS詩חO_rBqla䣧{+c3jGU|dgDk⨵A}h: +.)0Q{wIoS~tg,St5 eY2h9;ZoЙ}p z3!1 1P˽(L9،Dr 4ٹlE铊8oU O)F2`t@/Jd^hiQAAQ;Cv'+!D-ccwbjoZ61o/=a:bGE#}:Va U+鬮7:r B6YB}}Sw>H@Z|kZ۔v:%ɐs0vۛ/4x4Sd~!ah0mz5 wo.^}n~}?D3qߩj8=|d-}k}K 5ۋmϴĴ0ʠDs~8%R֘gP")}o0&~*N+X+^>Dp&&k7ah;ȴ-q-$-:M*?zeWh6NvLJH1;e}ǚGof4&k倾ٽ;j*x:ګ+[qibz;[$E&lS3EdAӂG ق-1 Pc]dhr{"1=)`@eb^r1gLkp+2Tؚ8[8̶UZFƮXhZBcbBҷŒY6q.1i%@X;=>NϿrNT5'LNDťkZ!`9r֮ Yc(ʞDMЦThd,MRf`!ms7b(.]AFǮ-63M#d7AXԆc"&TPNibHg@&0YiGe!3䒴DjP$0h"˜YEHNƹ5qZ/NaL]ѶG7rc\rjGLgC'y$UDӦTC4R3>4@!m&yOjO8fKFɎ-' b-Z,zhD3MK,s>\[듾G;lwh~[xkz9=Ogy5 Knk(j-zg=9kK-CX,U)-> 6hWY$QI`F* ҩ눐kRŚvO7$1p} ܝ1p7YvÉ,=RGT^WO/6J/avLc%ʃpL=?9.e/`c֫P\E&2c !1\XanǍ̠CҤ"%1Dj$3 VN̈xdDvvjĹG8w,> toMoyӨ& pnטo>{r+E=)mlJ4{D ^@>3CɓȔ6YG)D '@'o+w:4w=ڬ7u5EȤփSu^tV^ٓ~cU5ʓ/-h0蜷JJڊP2Te08M^"E"E"E7" +:pP(3ƌYu!8* GwILTۡp2{GճUxsrhqdGWKpzDbп [/tRjJohzG2Io~uTsxo|Ugٽy0779U[^8t6F7^ yEM-F7}U)O0>܋W~ts-2;ܛ$P`roq8ܛ$-0ܛEJsoaWWE"VWEZUҪAB 4tWE\dWEZ]++ C""oHkyኤD=\}5p%|∁' +wս'.I {I);f+ծH`)UWC"-bኤTLp U9FuHJ7 W(v/# @wB\nC/7j1CTB5 / џ�n07EVEv *~f ,ƻy%CsՖײgݨ-[}HO!;zsxYxזK)޼&dPro[q>T|wڿF\bQt,Y-oy3.Ϗz.=&Zwh?ɧ&%cє[.~?m+xWG}Au\syo_cz q~Wfu;Or8˒l Y5)s[ ҆ +6SZ~eLЕd]c Zt%pn-?EdAyDUz>twopЕMyDWWڏϱJWϐ9miGQɸ6^'_ `!]` ѕYjh7v%h]~t%(!]@.vA7vWOWR߇3dI_~^-}etu? KU"OtEJW=?|0'n̲DpHWG:>9>4N@tmY 6![InKc %?oNnmirxG?<}'>J)xsI]:z7L?~߇פbv\5m]@|?N@ɩn1(P:fh;揹ݲ0 f5}u}/(|{;W]~oH9GyiVH6GaC^P32uJgHņf 1~Ñ7Ñ/~Ƅx}|?9١i%Yl3&zf\<z[D s}~5fWKekB[)P q>Zk$[PC?%Φζxg;|>s%k]Du3}yT.}Y9o:+&rw@*rcɰ\13gI!KKwAc2cs"!1o!o67gPtP)VK1!{pGD[3gv+ M=")WgF=6mC:ølC1; ĐRmY9S-X89f0Τ1% (q4vNUx_)`L|cN]\ۅH|VBol L0c63ef> :p2>CБ]V%K5BҵL@hKU!İ蘎'$W".B`) >@ E&rZgd^0P>DTLr3P&Dt_u5BV. q'2CpXTuqU% 95[r##V1"xa;PMuh 1H_2]&???;9ׇ~4dji&'Kе2"9qwP6r,EȋU Ll9B-pu $$4(Xsv,+m˱T ]cD;v@BBzxJͷ+Xі_U31buh"hm=j|#9 gB $"2Б5v`mBgU酭iFJkh@2~ȃ!:8vGyPq𮣆OEҪ2,j)cE8ag4H!(D˝yXgo>_X++,s)&jdrԀʬvz6R譛ޫUD23ߪ$솲 |v|%%C%/. 
Wq7g.w㷫{vǴgsT&&P'0 wT4z1FEqO- ~w1zY:Z4'ZKVP5KFCo b<)mWyM <%2aBr%׎lzA3t8(#:)Zӝ2T1J`[Q2]Yڃ&P3@zOzR {alɹO˨acW]M!;@ N\+|K yf2- &s@PmB!~]S-.l <T},y ynzC=*mP zxy ֻ .-3u5,(s!~Z)AH`k|TlW=z:JOJ(T .Iv[1x@A4L^}J>\U'KS`jCv&f匉^C,SJ,Z;)z+-,-~4~H_ !BPqO!j)C`j?.z6OVzvvyp7c#K.G |cE7[~Gqի[PK- '."2RoO_ٙ  ;t+dޒَH6'@H}@eA}@R> H}@R> H}@R> H}@R> H}@R> H}@R>vK> ڐp7t}@1}@R:H}@R> H}@R> H}@R> H}@R> H}@R> H}@RH!n$pيH}@@-}@R> H}@R> H}@R> H}@R> H}@R> H}@R>g9ޒێpیh1Q}@> H}@R> H}@R> H}@R> H}@R> H}@R> H}@!ѝыWTZ\?no?l6Fuޝ^>BQؒm =oǶ6c[蟼m (Ws-}0*3 ѕCan3t%h:] J=KR?;I<'%!wL1[w/|Z{5`ȇ4t^w_]^eugJ%F(לͻ K~wy|oهE9{{&)&|wڿF\U?*b߾)*󋳓ktobe|e u%~y75ק7׍y_v6v^Ȯ ]=jWv鋷 5]{x &7vI>@65_~$'5_]r-:^ P\oq+N.U~~͡aytDĮƑTh_ٯfYAs%&ybs~6cqyl^ZxLlZy~ܣVp;&w9z޵\#Qn] ʳ㘧Q1.>w,7N:#9 ՟ i3'FhsO)i O`lغЕ, `:] ʨeϑ*E0g:xM=HgY*U&2Sw@D0OF7'r|*9Q:ԴߣnT;oO:\ EŜ w^>k3ߗ<8xWdfXy:/4(5)#V0(TkeLޭa]nq-!9~x6(1,;pϳ*fg, -R$+Yы*Ȳlb>OiӃck!_q2x>H 30˘13 bd $2*7}Т>x>ݻ~("j~:ov:Ry?)X|ڧE/cٌ_o"hYZiY#Qa<:Rխ(Sh)tM^ lQRN,q7,e*&2>H`!hnb\+DE`?^%1+}drSRO/WXμԦ{K{1K]xjI{z|9fx tEzϒh~~Kc2e-x~nCDeʗbk+Q?p"r"5t->:& y{8P)k%wQ*8 RC HEZȹo8KatIXRv3A3RI6u1p^acmky*CXMf1%ߴДb~yh59B^8K UZjo_0/vԩP@TQs{ P*ɝ=k}2Y&x8l:{M: ωK:/cKtū(T3y~- KU>R[4e.S PWaУ&GCB;r/vz+[L.,2`1F P lP&Rt'Ť?wgxpim1 K8r%2Ѓs٦O#h#NE0#g(hQ u UPFaAMk5Rlhsl6Gw R>>.Ke2v%SOaJ9Y'>(T{Ww,~ԣ 'y 4M%)h]KId[[Y\ pT"sx1UW5 n2%5rvKF t2ŦХų'+_߷ȶ[ 6EaQn+z,tSS*ݐVɤ'r5~*@-AJבӃ%RֺOC^dn#D^nJa,J > gD4n躯Z?5-D5h!Fh4È e8X䋋]yj qGxS4TzU;ؔ#j fn?^v~Wl"L;~oluB8 ȵsJ;K48>#K)ItE%zq4B劍{+GP QDbb .'"P ueNhE6=Cl*`"{k; 5rHW`ߜn-%:WzwE56Vh$Ӭ`ˤ _g)Y~:!ȏ+<=Rj31b'$֘c=)I+G d:3.|zUOy9ȣ=6T ýE~V:Dw2ILimv(ƽDRHy@Ty\7nyT ZQ9+cIaP6HPPciL@&{XtF‹o'/uZ4 .ڶv70=l /ēWv|>7ZWW㇔z}KwKwQ|{Mof)i~b?V!rwt<ց%EEg ʿQ@>&Iz7`8+"b QܞYk]A--f~\SlD!Ww"W"Sk!ɋ^#0&!'A(q6(Uk>/ArQ44ߧ"]sn[ e;9i -n8Ρj2 |.}f6^~d,L$Tfj*.[wh6wx3`kFa"jxdxY`UmZfEΠOxVld͖j޼ӮbhND% zA8p}'?J2Km4`"r%-8"A.:IltHp KyCn)w,F{7-Ap(n_T&DhQG9 I,,H!k [,|#NJ8YʍƁ7>&qCꩴ ;\6o/pP*n¸Ix" B*D/WUN8b؜d yǠCTdR3-8A` |'xpTίxGJC Lh3͌:ӊ8l)tLxzg Dq U-Unx=[iLJcS^Fn#=XA#qw;KGJsL+[Ry]vs wᵬh\?r n`m Y%5G "Qb{ !""G QpC`+WxhԸ/j,#)T`FfEaWEA0`@l%C &0i]2k?ָQʸË}2w Aa\dh~!R{ow!H/)ݑ!a&R`JuV1"%FV@P1F2a>ۀPoGdTaK>)K"2CC6xјIXy ~IrՇtS,I%)t/pTVSvںh aC SR/!2!G~ kw?unvq*$$nX?4ưCLRc}פ O ٻ6,W~q%X``&YO0IbF=eE4m9[&ECdKjZe]]}ԹU$Rq] ycr*g0;\(qIuDc;@ZjM6Bvv51ç3 *mP=MF9x .ӐW܍Չ(òg!8Q$Wy7p(h5)Eoy&yqNc=ܶ6N=8p:blϲ -͉pLNFM{ܚW>5i0,-[Ehj[H&&/*B[h g@ <!;($"*9q!qz^ q SbTrKF562B2ʂ@qnzug̓&;Vb2;$kD ]ɲ ((ƓF M,+i$JU: g#9YM.UQ3A%rv&D-#vklGl7%+u˨ jì iT&"p\jd::ȻQXp6 {n$ie!#3ZQ!|di/8XģR-aklZ'Dlm|싈eD4 "n@)}Ȓ1:ppSȩLT#p9!-I!/P FSBn9 5 '!*Eqy3/Jxޣ&@hym{ʴ&vD.vy.u6JErp .\ܸE"GtNB** P@ibpKdT@jlkt싇2ykB8 5Lz)vN\~|&G)g3iÄhkIWRQצB 9)gggphT(\D}tOyB@bX-=㻤u+UOkPיanYw=3[!76XGl&Bε^ } t\\=vpE$~p6f]!$tj7u5^yFAPRa<mo~ws`62w%\M[*6 j|cayb.]O}Z)釱O@ʫ܊n\,te|;#YAW PAWq_ ~6Vg_g)r7xvu®K69H` ~3I}Ss=?LZȣant|i?)Nq^ԒIQgڱ"b:?)Wjs(UG=m}x]Ֆחu[^ͭޭ.w564hwۮޛ٥׵}MoVYsnvjx8ݛ{3q=eO{=,{7-9h݌ Q!gsO/ ?^3 cq4 I&]HåX&2` ܙ!N jiuX*7F+B|v`䳵#+^?]&=~U!ߓL׈W+{h;=qD(0qry,YZA\VOđ;@i$g ,a*p i%aWC WPjOO%\e9Jz,pUpԺ +F-% vy f(W9ݒ[+;` \_wztV4`ogDEgϘ3gT0Ĩۧo7lz;䯧70߳a>Fø.^|bnD9q6U?|0__6MV },75,AlP4S+'|̯?~/-$ұEec9}ct 9,1GP\c YZ&yWquDphh p +PLc+0}4pv,pDR*Y%•b) j8~GQ5̓Q`S66f.J=Xg2hr}1wͯË߯Ggu6>ETmc/I_"W,%Po\; ]ǴwͽޗV;oO{_4ڿ~3opH4m0b} N5p6s~E?f-- AvyƷ(f"b]ٞo}eMfB {/ҜJ 5b㿰oOz?̠ci^mfd3ښjLoƚͳv=~BI~T'hY'غkeq<5?AC)դІݚSF5;eޕOěu`tN'ܧ]j{W'?M?51H"`98. 
iK 4Xz`c%(G&FkPj0[CW//\Yډ뻱_3:YRBUdke._Ym=}u]˷3/;M&,BsZ[f))+% b)}HIZD%kIZD%kIZD%kIZ8'99ynnMb!…=b_[#T*%.c+!$HPMWԂo;W%j$TF ̊hXQ4#l,axصАc^kM'6DB-Eu<*`;c$:'-*TȬӮD;0zݝqĬkNX8\츥et9~楴PTh(G9jawIG̮4hy8)Jp:Pm )|!vM;s;-w`s`-r`p dLC@]LG<$| -˘@(1(*#4RnVa׌ <pJFbP<5 #g7rg -g5T71qŇa>4ĎUSALWf8^ iRQ܍8I'.M@ TD"]!R° 6  CU:*ZUE+*B 1pO,*GAs'N<Z bH*K;e#VKŊnh{ gd%}ڴ9g~Yw8 ɷL\7r|t R6n1͡:#3A4??y_(AT!@& hf6uQ(oir"iGsxTq{` l_޻fh5]1].6{\<^2%D˝qVk*2"jj#MFENrth6^ ,E`*ZJȅ3Jp#3):42 ;d5Y_4iysW!{[u*ef,H4L ey62Ǯm\1>VSb|l]1o6ceoZY{U^G2{T^]Ѱ`nk!ҫȀ v:&g%'ѓ`4Q-\93WnJmlsL6UD̼v]Ns5 515+CsT SΓMFT$:q$B^2C'@08a#QR T:jX )b< !SP7wn 9\3OSDpG @ n(';xvW @&g h͒APlO3g4]Nl= q F$wk L^hF c 3BMHi,/.A^2=R-AK%dzz<+N(IN dE" ڐ!8"btT-QAux91wLYW42EԽ@QӚ]& &xfGb ̾r15TB <eΰ@CanS"֫\} P:ɜ1 wbb|S$zl/Qpߕ_ Y+qe)Yppp nj& +UA'7"̲O_[r̮ՏBq_ 7'RKJ7UvADz9ܳ?Og#i4$=3~{J/a;5wSկ>_D7{,ܝnpɲh&䐲;^ +=Tss2S> p%l)w t ( ?K `ڱtzѻ&I7EIkBA_ڟ}򮦱ןhSlry_kl-Yؒiicc8QcyGw\͗߮qz.NO~y/jlq_?kHJN'DJٙᆿ/ǃk7Gb~wd)k b r٭#^ޖI5ų,ʓb#NVs2/]]WU`|RPA(yZQsi2w `6oqΛtyz㤨w\z8m_My90T Sk@U;%|̉nj 8$ivI>uuȃHh`D?H#j llM?V^oq!k! dզB==2M-H^[#.PT[+|:6NwKǶ7MXP]$L˔⨻0$PTV:!Q1HAiBnI̗sW 1{(0H=Z٪v@۩0OJm|W*CJ˜ !J[X#IͶ;&'ݙ︙l :؆ :4akjj{X9-EڹImN''Py@#@QA2*7VR֨m(\dsM]18cw 3xeT!2i5 >`I #gOa!M[tX˕Ԇ%LIQ(+]eMysǙݕ'nw<>ײDw'`M0\':9h+ )b4',A~H11))vnZ Ed/<8o Psi֒'9 ‰d,ʶI2BT\fB>jqrAq 팊~zf+!O\L{G?)I|"'=5#0ʜ xEHTdOЭښG߽RqhBuIB:x&)4@$c,FN:Fj_1E J0 TA.āh(;N9Z;sroj76uF~;˹UeLC{?R(8(+,'eIT[QfA,[ Ɔ꒐íP ^ā-{FSD"KĻɕ&* 8!}xWZ%Ȝ8-T/f痑ˁ Rc])JY™a~^)NgKΓZ̩\O`Fr Tj$Z2ujk4SVjvQ_x FP.\?4迿fۛ]2Bz5+-ϯ/p18A'6CrS7ls7nfy9vi8*eԨ\;h8OempKtWF:^7E˛fzsHXq4?Jl >'7,w> }Xț򱜟~0?y=w*m}#IPW(/aWKڃekE[p_5݃oݛMw޽|eW߽%2FIAHnu׿ޣkT߼kn]/ }MyC?rpJbA $|cubVĽ$zT=o\O&#W9b050hf+ !NJD*Imp m2:Bc&a\(\)"t2s!|$nDe .JҀBw:*V5q`urPyڠsyFugZ{ LN7hȉ'qptB( ,@1#cq9<'Is-߄2§43 53B~4| DChK,1B;לBv'-:)$ȖFhz%߈DQ+( !@ڐ iLg98BS: !Jy(M\: @yjA# ҹY 6EvH2}T*XEI&Rcc0$V'H-<_5ӴGDŽ~)"NeMJW `%x" lE#orRe7)7p}|ej,FXtk-'ظ9vБPc%M[Tƨve7(WU֦`4"t !ZMH,zp*:ιr6rѺi=<`$ !COJ˃Zy.> O#YD|$#KFШw cU$ d[$[$_],:GJ ؒlE,L0fç*h&Is Ȣ8*"%g/53K36֝z#dB.&vH\ JYPɠU'eqFx]Ach۵f:UI3u!ߵUq'U&DXJ`صfו8Ov6S]MBͥ/}Zw6N=j=UfjCh61yզYdzG5agˡ&STbQPMWeQ7J&@tyJ m6EFCTB=/ HГ#E6+GJAos2ҀR/ R]#cgTf&7fqgٗY\X|8@;'njt!"1dH L^%-5(;I +X,rRttAA UI A F`%[XYΔ:FydVT̾vgcWԦQ_] LTuFc%ei )BG@ QVk:5YH',X,c D֢#dXFeE0q5!Dd:[;y[~c_+" 8 F`5,%Fmu(Q€%L.{ILBM*L (o"#)xfNR;dk$ZƙB$yH bdMZ0vL-s2wu`\-n(c]P.Ѣ%2GtMRR$4^In}CI%5V7bp {gWcW<ĎpǍ@XUzٗU~߮[6x3։p_U7A[#Տ@ZN+4mNڟoNXj7'?()6pp||S20kßXejν$q3fE+շBa9 9v 3?3X^?/%C 1 *hfw‰`uky!oL*GA NZזBT|qσ sH,"$H-jt;RVTIv-w^ø]HO~~YҐ}MT_]wnz\yzoHy\mvf)N>\C]LFxtW[GW#(]uo~O>xT!ZysW>A5^-bmej_\L;Q&tkHY+ `Yہi"BG'RhDdOBOJ)'-^Ny IZp4i \=MJ0+\iJ:# \ K vb)QARZH+utb"J+o[J#/F"}h:M5Y7ȆTX~JCq זȯ u*~W<@斀p&ysT/-N4b]63oOijؖ*{5ڠAĈ<].)n5U&pFw痗bj  Rprժ~OoNtX5Ybi0 \SWUeX1Z G/omaYgܳ`]Zxf 8DzeǣU9M)$\ovA{M(+A܎R\}pFX`UYM*Ki-•$\2Z:m=֩+F8& x1y[]'hK?a7[Oi7sWeX]ag+>-Gr竏$aknXjL /5Hn_W=ƹn,̭Vv|N'oNB5tCg=$Y^p :!SQyle ې%"6_NnmY8jrrS?;||R^s6RFJF G7mMuc蟶0=jNrF+z%ĩmcT}c)Szv':hpDm8;l8v7nJ.Lښt8x~)4p0JuSN|{Jp1!a쁹BY-;*R `ˋ wv#z/P*?e炎5Q\$\vP)2` SNNJXNhsI갅a,(mQ2+_ 5AGqty<cⴆRQ !{LJhftX)7!j[gUe#U/V.썐EM܆>o.}6W]k}GW,V_ehtåͼsgTz]fuB7M:T Z(G1ymaPJK2RY 9.dM?Zzp|{<5bͭ}q]|m-V3(-3 _^6k9u:WP DA:"E` (]u&wQO;]ۤ of<2/0n0{uoԛ?Jx>׹ҡ!AvN}ZPd{{>~9m&w43+,GZנԄRD3-V|@(sW;W+ %hNVLIsE)  D!:xdVESY$5FRv_@'BB:(xi%r2P:8\r _.J{nxlJ#bn![$>6c݌7coo |j~]|uṂEsc5 16{{D&)huV&Bω,B,Ȋ`lD2F!JјMQBQwJd ( 27 O1T(*{,+f }c"ۙ8[TOtK-G/ߝ}k<Κ;uX=̱'ߩ惟F o ōLYC5<)h8d0tQEmkv=w7M͉l /!3Oޘ&Ebb|]yOsŭRK8=ߦ9~y~/??.?džͬIJGHwTx/\A-ΎNSɩh tg0>jm%y)p3O2]bRCԥٛ{pHQA>9 GBŋ^XIiǞG,QH{O3IR$M5Q gf;m1 zL˼d5o6,4qӮeADa8Xq|M|%m,J >1%OU6*Ax0cTw{|yC-jm@+ʘ_$DC6F B ps ZzytZ㤲"֪5{6fJUP..켬rnBu⇹=ᑒm-RYSxj2[NHJe`1yBbh~:BYS0 Sǁi&" 
*mՒ3__A7l6t}=0B4vyzcY.U!IPd,+FҀBH&5%>=aN}Md:T6ǴmwF[SkT2i)0<:I BO'+g>9΁j['y.\S#e!؝΂ǹ*D&F ezkl3̤i w)B4 8kz:ngn?l\uUZM7'k^˵%n:&79O<`Ns&h# )NDJҠq?$ @LJ;xb|aYRmq*xUdZKЀjIV$0+*rDQq9@ uV'/?V *WXfH3Fxe,I H.9(s"(8AHa?_ a?ִ>'*W`L.I7pgx)4HX]FA=V$QOG1%җ' V(<q99I5O$C7? `kpd87p~ p/ęI1T"ʒ4pPfl}MMcCs$p#"OPW}zPAPldOk*~,C%vtAN_'(Ϗbߔ{oP$Zk+yDg#1IjrC*rNHϟTVi*Zy=!Q= [,>k>M۫dR#s#Cb\#&j6Q%gR4}qmDӧl^ÏΫEpAsn RwzV,7]0B|9,/`68x&Yדdk{qkz:_ [ߍkY\*oc  +3g1Ayorrk3u+gzl8$w,q8W*%)=Űl͇RɛY?_|}vj*[p6A%I]y4f"4_)pD@o\i7ekM UJG=ޠ]2޽~{_}~ϟ޽:y勗'o_=ǽo^&vu3 D;p뿿]붺9 ^U[t7W95ԯbP@ h-g]&:D'QIUB#3\# Aibh (AOzhC#\ʊ{ѡ`\(\)" N 8EA"p`$*SQ:/0>'\sNJ&X*uYtvY;c3z ђO4R{*JL*Y#IwxpQ \ IsBs@6GHS &F#mPz{9̤ᤴC'"dH<GoD cqpᐜ]mH:=m-dd4k8DiB|u@ s_9tYyeho5h2"=)9!} xYR(*!AYF>A۶>8&dK*uLR2's\%BA}F2c!`kUʑw1\H _9n3T@n1 sd~w Ӵ'y2Oj?[4'N= ďةS]<Immˏ<{ ^yeo/NE/χ6J_<]Km~a7*ł19Nrʍp,Rqڤ3>_=9m{OB/UDy袭d"[šG?[oxIMw*ޛ}9mz%qԗo?֢nxzM7h:h^Tsnw&A򗟦 /rg,ӣWcqvFh?kWϔnCGJ 0tZ8NJԵRi,@J,][:%5IL6]ouMG;&<n4I=̄(=\*g#HR!H>v&Z5N*AqH,RE4q.@.J& HJDSG Ĝit3F%tyMBw a q  8|u98 u 8o|$))~qpKᖮO028 ^sD>hT?X?ez4b,Cز4Z:IaeE ^DB[Z{K䴑Ugp羽f5ػs ]7'PYF`X`gmۻ}2)yH4:9.Q5Z3TPVSiiyN!P QPF.,D#k% Rid+n'en9x%x4N:OٽϖnQܑ}NQi4 U~BDk**5ET*e 2{b6G ӻX.v] $,'%:J@x8Z*?@R3t3ZI2xUpm0#)"t,106ldbC6N0r䇤#ID~ K,O7tP1C\M{a7!ȁS)JlhKRΨI%$$2eZAJ~V{UNb.g3IJ SHe9`:F'. t6$Brk:ewǜk\e/ 0/9B;|E#L<(au<TsF+,QjcH@Ez=x}Wiڰr6sW kT1cSL'>&[FyQ[(fڦ4ykcCyq}%lv2;גSPWm_XÈ*b= ⠘sy;BU抣mJ wu~7| ? ֤ȴuKewz˯=YZo{#rlaTޒMQ6kL~hMN! NT09w'j}h§~ޜ*+& ]GZٻ[s&ol$5}柷o*^uO@'3d>M^DJ WN)mK Q] H.&$fIq<3S owM.s0M@n*[C+WfծY5.W\K/\\\q \ U F\ V08tH}>Ck?RfiK A@jj\qKqmۡ.cU-@T9 w/(+ e_RbAľn$u1}ʚ/*J7*iNδiʪPGP#}*3Δo 1*s+UAU(L/)Fp^P G@bH| G*t4LْpaZW Pf"q#W` bpEr+RVT qV+ +-W7t\@F\Ȃp1*W$1Sr\5*݈z%J"s QŸ+R J3:D\Ykڟ)>H*j^ HCl#W:'Jr &ʕB+T ZWRI6}f#rU'o1q\u5Njk/l;ʎ5 \ABZbpEr+WWҙW+h}jxԄ D#fuЄLV7,N<:, sbYo H3C R2; h2hA"N++R+qE*quRLW H$e#W cbAT; Zj WZIl"+~?\Z>+R)w^++x9#H,&$WұW+ǭt \`P+Ij:Pbcշ+̬wGLpka"X0,Z_\uR垦tS6^uqkܙpbpEr-Wv_uSĕN0VH,W$T)"zT]$t`K'G1c{xvIdk X)IuKWAKgNq'ӔNVLn}Qpƨ`!֩ȶEvૡ=J)r(X}B7”ȑZCHfc wHxIoH jJՃ&GfhӓG\ | / V++@jIv#W \`t1"V+ :H!ʀqF+t9ʀP̘)T::ǾCĕUXI](@9 Z :m+i1q-jSK[ PxE{5Qfշ+}yFqGL9쮺 ~Y^'wV]U*6$\Wd=0SHbpjC;q%ԭe> &N!XpFgtK欵0gy<JV-nn}[| N*WiQo@f/q7 0͠ \`'5Jl *qu8#kw,$/W$\)"+T):@\)UIP++UD&F:@\Q7 \`ie1BY)" +RGwu2s+l,=\+Rq*ňĕQ0S P\Z|0H*G\$+lqW(Wb Z9vR爫qş9;L=㪛\羫nj͞2tS6wqk+d1T-WqE*fpJ)u=%hhQG\p#K:[cmH@p>UY& (btq2Vu蘳ǜ-e,X[-[cjߴmWdloϧih./׍R)5~OzݕGry?9Ղ͕ #w|_[ۿ>Q7y~ r}Oiӳ?szã_>7)7A_G!|!!~w4vg秴sXN>N' p252yvܸ_'o=myYXnvyPtO~񢺂#Vaӆ\z͊r6GbhmN2ep R:]\A-ǸV9kdiVv7nJ73,i}9>-ת&ˌ6~rl| ܿ]fXu:87`ku]$Nfxm؀7dw^~<֢uJȍ<ݶ]i~uGzcľMR]:t:r<k烈ϖ>Loid3fx#r#Ȥխ/VcEٴlmǞ>iokO6GdiNP~t>8i]ߗ*4^hW_XQi[V~%/*iVbPB|7׽ mq(gSk8f|sRh ǙP-[ E-ɌiaY3m85V n֊ֳR*9Z`j\ \Pcȩ`!WmŃjr{<] ceZ-'þr2cō=[o/ⷝ̃Iz4Kq;rm=+;"2k49.jy|ML&fu&}-ΨYG9:ftC `coG7#,h Sx/tSk`H+x)Y:5}gqӔD_tF_ A7mrH: >gg?/!omoLP=7CAٵ'v_/-GQEU2Be>Rx,ՁeVhTyH Àkj/\Zmކ4ZH <"ڃN uϑkor\7\Rjw XWׁ[+={ki?u(=uyY7Țăg^U "Z*誘FeS=1rFVY9 #˂1Cp6`[Z7k c,DvAʺVuY - PW#+@PFT N)Z&fZN "{+3'lor6 b1e@VhhɆ^|<;zPuR^|U_>'+q!(:.Y)6)8 ӔX4G[k%j5K<@zqtF  : cQ{t%޺cm\7Pvböa;7k< {ȵB EX)^Qzok 9WVΨ 1A˵39CbQ;4`|\7tȕ֋Fn[aOy ݺyL\y~QVT]GYv҇(%QY'h6TjkêymuXKM7l#lNgǹvs%xzWCmu5 X|579O{}7P^_o['<[>S4A#qY)tފ6yUqR s7V:Xz.>ƑGǑzqdwzy ~3xAg6,;L`اgwmH_ѷktm×a~Hrϡျ_m=%%7upbYʶ&HlR,3p^pnB!Dp'T)L {zs)u t@?FChjl[~9rԌN! r6V|r>b@)m0óg$F"Xƽ`3'AY@ -FL+- u,7`1;-yY3g҇xW2`4b2%$NftאZjLEtZ bVh#"7XR3p71\1FdcPQcОmO;gr1;i= j"jeo**<_Ϫ/G_1?mnSCՠEGuW:ޝ]|5x_K.hLdtqoKKw Vo?QYG\EqvVs#)hl1Xs5.gt8Y2wo>XO,-ƾ);`mFmݴѾ􋣍/pFix|%Hu,䀏GR}źMCIQ?mDn>ShVOFÚ!a]2D{.,r\@@3Y &wRC50fedGyW_Mr6,F*Kw>mk^<$3㗍Nnzn29Co2_.ݑyYLh: پmp6=0MHiPCuwPPKÇ읚wytv LR+?Wqxc^ޫwtFmwceӺfՎjbvM.kJ! 
+Go׊gJmć]朕c+OK" 9s`:TKթTէPKGh- d䠝 a&%ҝɘwN-px^" Ym|kTjnmm0{{n*5b@yLqaeU9I}DMKf[Зf0S13-MLYTю*duqO9Eɟ39򓥎޷L+ 6^d0~HVw8b]+b^fTGk'4֮u-zR;nrr Hw^I7Lc :.I^r :!>IDP5׿h{T_RY"Xq|1f OPjZ ]6[TWV-.mzw|c5_,*UWRU(J(#v;5ngqۋfqn[;K~֌J0FOE'X "dr{VRBDik@:*阄-ȹc%M; +{Y96B0TO NjK?ӖF]/^ˍ%m:.sUM Z': Gccfh9[Σ ,)k%9ڊiNV67uYd10FAҠ02lK'3Zڕm8"9b j>qzA:i(Ϭ ʌ I(Y-G/E.d,8{#CA8I]R!wFJ&z JP)!DJzFjqR):Kŝ`_$%/X]Jsҗa:X66,FLo;I.6^}~8O$6a}UŊg8:gG'FptGWG4'jƒwY핡׊i;#KkrM0|Q'ufF?fn0NjjpjQ%ٯղիʔI#-FG>h0fՃ]^^qJ[H>|xEy9 /g obAE̙u8Σ|f^욁Y}PԍWS;J%cM#)8R 6Y0yY_*ScOjE3Xfj91gٮ[{ {C6$_޴%-o't}9%d~PSq(504/?j}tUN̮% òE<ʀhrq6lS5?hhxTM+Gax~JL߽ۿ~M}|\Ƿo~|k BemWG$GãPooЦiЀC-b񘥳)Y2ijIiNJed d~DSq:0pXɟ߮6<"q?CcX1GY@ƴ9Ϡƈ1u 9eޭB.𠢤gЈ3N0 9UOZGŒ=Y.gY3Mm}pL,A;t1جNZABV 0̋/mm}o[ ^)='[31] u0L9z=%,Oގ7/T }}fC,`\YXQO*yXOhyh5C^}zuO_ZHF6&D]R{X&<+ɒiB%-$m*I;WI-4%fHAom # !X A 7k6=ԴFs,R6tYKsDSrkYaѭeqs3#-GGq<=_HQGh!N-@ *1*SP\$E g=:!>йL CdKCcY9a@PH&-dR!c@Is]$_n&(rKe|ٕl\g`Y[D Yxo0mg0o!Iꆄ<v iUzvorBqme=G1 b>gڪeN'LY @MeđaLjVc2 uJw'hϛ69;BLe8>x!dEB1PeިFc~K NTLĜ2B£0)2hsيL*rU-D>%{UM2`t@/JQ2 vևOVrBZ:ecⱎ;މs5o Zn7g2ZNjCw2)/NҬ5Zц%fV"ܘRt{q{*{Vڧ\k~}r@>sxs |hv>6vxTrk`4){@|:Mڕ-1Go$R7Ӹ!aU;]\2zVk }k ۛmi)ι ggNq.CqhS igt "K)ALAe#qLW'VJ셬QihS[n9ʖ2+"WE5*ט g?cyb kfړvA6)Oe dVaUDjO0 kHɕA!EYP k2nhUQ.YÉs5Y$ՅCWA}ˢ[҈Xq08'FN Жһ\֐U] `O5/A:ȁ蘥 ɌmPFN ax)$mF:s29&Wd]\Jq#KI4Jc :frӳ@^ᄅċ/DKQrU{&;Q J ee4Hz8<:C|֮xo|XȽOG(VIH+ï3]CHV٫*imʧvkW}C8ˊ8š,Ϗyep(ܿQCӢ.xJ2v ]f$pj 8|з{u/|zE8d 0D|v ݜz3ܵc_w[^o0೫T+nذUNf?v%bf닟l>eEljl|&`?.hv/kwofbb mƿÇ^Zϰ,nKv)Qwx6n>)Ʈ:"w u9L񊂙r+ֶT)rKhȴnZw v6-z2T-zqSL $ՍA֣ݧşXrUٝuTVR(Ośmh {iCi}OazQ[RqnT+q1]QuFK 8~8kr͈t&/xœi,UGJEnx[;jphe!J'ztբoO8]6b4tZ3jh:]tv'zte {FDW8mh pɸONW@r _"]bDtnD\;gW ?`(/ٙЕ1eRIlpi>誡骡< +o1=`ѳ+}ѕV (_=^y~z`Ͻxap3k?ϤR+]:;֍\OcSc'&t kˏ)_{5Rͤ:|B[l{n+ iWSޡ^Wj5\+ {vU5?D}z]mFPuD]~o=zr!qۙ +jGI.aFD*rqei'zb'w#%UtZQ!׮ 5'Eď &JErlM$;5<5M?g0@/ê@ul6^2RZWN`dVQjdM.HdЄvM?|ޔ;Q4l t fQon@y.lR쩸m0"`촥hAMx%Hf㝱pt#{C$GB"dPF1fVU,6CQ4RWVʑWoEu{M54GﱥZI9*m!`ʪ%e 7ER(BT1'aZr-탐d0V5CbVGT9BA-&J8_S AXl۱G@cL{}&0hB,m SYs{%4) kR!TP2Ɗ4yT0ИUF\|@lfk+Jmᜐ[# u1^x:?r,rɭRtHR%Қd1b$l0'P^ͪ$YXɪR(9QJR-NTkArɱy}onH>L^ ^d C\&0~@_+&zD1R AJdM }I(ZbsQƳT6f^d$ԃŤSҎDdBU"8x5T')0sC`ȑ-Hؐ{ Ht=+ .D; X{(Ir..9 0/#>iuXgzS 8·RYa4_ tciў_mz1.E=144_(*/,RvNR"'x?o.`mgGmlי8ХEYtf>Tnwjf=f>]*Tvڦ:LK/T8xPútdnO%Ӟ"Cҥ`suARnQ L͆y 0/,B4N"&iyU,C\TT y:c4,= `ȁ}QHV'Vp 3XhG.8tXDu<*PJɩawFR D9%B\VHl'*aOvn oY|5d*"v1IWCj,ZӴE@IiayGFs$>H,rZٶa;\ a_)]x̢j֭:emFq^k<:"GjfHƴ?y+7|Aӌ<&w:j7ܔ(^ 0 尾sizE^^ȸC.|-=hzS)ֽn[<ơ Rt 3N!SRUazT {qh=!jV5rN}o!ŝ(wF35Az rU74ۤMOs&}Ե3H&sۣ7ն!5!~mSG} V914tIА?AjMop]NCwcqU>`QiUH(b> u*rXvCim΀P+kCBj; 8l@}o*AkOf;6QđX2q;3A2u- pQi|S1[wuZv4e ڦӱQ&MH0 %=Ckk-4X[*FG+u)D!Wn0jL^EˠVwKqjsp61U2AU.jTW%굻%gP}rԹʡn\!jJ3TWۄ_}urXŏW)Ūm_"Y.h9oެV?/tzTg]9|:t}2oX.ZoMo6wgS2#i~Fp~\5}|u˳NGr.vmo3q֋b@gWl:*pO"(jD}|@$7Y.> MOZzT >9h΋H|@$> H|@$> H|@$> H|@$> H|@$> H|@4Wu*dN> zѤZU*}@.$> H|@$> H|@$> H|@$> H|@$> H|@$> }> |@WVe6> 5x(X$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@s!Od C6}@!ofJ*ZH|@$> H|@$> H|@$> H|@$> H|@$> H|@fra}< ?'//h)z]Z?mwUvb m)Yk:ƶ{j-Ab[m鳮Ѽa+W- ^p5C\t$8MlZc" fmP \\_:HeJVS1HeW$79.9*UԒ]WY9W$Z."OqE*\}3r{v;UʩqWƉGjq*) Wn]V1'ZW6qE*\WFk"#\ rb+Rs"Vp5K\w,?Ka`}Nݾ* ޫWʸKoit_[;\stg|u5/]\x"sOw~xmumC6kV޻- N WKQc~Y.pOV}vmyLâ2GjXwVX7gG}ӷ?W׈]6ycJ,4<"o5~6^ kUj0t3MBvgt]㺶}_:Uwãf/{ݓ4KFiO8q}Tv *ZW<\ܩ4zU*RjBb+\JT&%qBNqr>s`'Z0N/, #pWz.`j`c<\\z]:Heiٕ(2A;$칚yg䠒*Z{W>:,=Uq9k7@pʎM9LʥZʜ9 63#\Ap HqjCeVp5C\9pEe+\AgWd qEW'?{ܬf3qŝ'%: B1(F fWPks T(#bNs*!8W悫*?Jj>Jȩ`]\WqE*\}3{v}˩y mڹ'Cn NF}xW' \Ϸlf^ݟ n0\#qA?~CW_ 訚q᧻`.]b*"$3jlLUhR 䋘<'w{CwtlHa3ECjS "pi '\7,'7ԆqE*KRGpu\ydg2 N|f!W+WքqE*dWsU0:(G6"͌2U4^o +g :KTW3U2)[NHplp4@uGԚqE*]Wl9!HpNlp9pEjmˋIx;Ҟ]Nu=q5JppSj0P8q"\S +\%C^)љHp+ \AWT#!S)[F4\\ԆT:Hei: +e4Wl[X>lL@jB5:,[|Gn\>/f.ylRk]ytQ % W$8)s 
*6ȤVqE*j^ c2lpErs+M&+Rj YigWT+RkL";%j>RS1H)z6]AeV25G\tpfr78.r t\$+戫l+ΆɵlAR닟$;P\qU޳ QF vbpܩv*{80\ʂvE r3\p+ f+tڙh+ 2nj ^q9k8G> ͊ Hi9*$$֖BJmv+V+ 9 f֊HplpEr3v5S2NNE RZyF"x&Z"V9* S \&"Ε+RA ^]թVOHab\HyuWTUfUZ폫 FY#\AO|pq❋ǩM:+:QtgIjFsʕw@Z-;ӤZFߝ,^bBۥKeN!4h5>N (,p,uyTڔ$oaLV:1 f4*Crc+R[ Tj%q)u%bySz @R|%}~" FWPbcWPc\}3{v>U.&=N+?{ƍ俊?CZcnMp@bcGk5㌃|+CjiHc=b,EVa8]*yArt&tuB\+ܩcsKRW0y\ m.!/E]%ntuSW" p?aNQW3p ^J\^XRN]=H6/6& gm[{.pb _-IjW]Ek,Ⱦƙb7ȗ}AHƊsa8C5# p-j8d%9?ۻO-+ ֺ͙\̐8obf|1I. |K H*,)/( -X?MA樿,[Q*yƾlǿ$yY~A3| ~Sগ~wêE62'bkMW/(̟˃{Vo*1}nzwB Vp0`4Ϋ.5z&Wn9GCu3/mEߙs~ z!j&= F:=8[cR;ĬW逰 9`wE%KS0?_K̇Ŕȩ)5tr۟F#XBv}96;z?5?- *54vEΚup]Y^T#+ڬhp ̘-M^%+")+37+KUY~vK ƹ#} R0YNR43>|~c;׶՜ ղ@@y}V/"O.@d%p> sf8L(C7^Z&2@sFg9[зbNTj׽W(&&d Wl7݇ >4!uuQT#hY¬w++hh+?%B k, f'@I`'TKriv?mJ) fD$/h;((EЯǣc+Z A681XtAHc05!Z1I ,ZCT1boG,&h,A,Rg2DO)SPp&Sv))9 Q Ƒ5t"aS4HAyD1]2&0_E8Udl&Q2ʹeYOqjiB@_휼 Cg G6"#xB\ȥ*FHp䩤L ]ܭ]U)H\fSSmC;%d}s"LyMT'|lQXMln$kͦzFFfa;w$ t^}`0Z JgPh*;lÁޛA(AɬnGw!(]z-[54ws I(Ey1ˀK<5YJqTQ( Ȣ`;XvpX+/B"OiZ!qY9ctv_}sbD}|QC 0(nJ1+>\cL?l1* H wE犢.E]+N8%Ay$ SѢbR)S-s3!ig[:=TQl=P VA PƽuH ÌМ-C +|IPg 5ڳ|}'rlQt {T:.wO=cL1^=^6W< -a0 (d&1[q+oJAg9`g:9﹓<cdOdYzq@^dfJJsPF2 4F΂6P"V6nZK=h(#WCM>H0M@-a$EN 1閁lkS=e T{7y?yj7/e=5"O8kXT)JiEe[\``=іu!3HS.p9ޥ }Z:-_mS]x6?O0ND;Le0Q[:\BzSSÀ^Lo=OKRl[mŒb6 I3HV*Vݶډ·ŅKNUppL i!!E í!lF)Í"$E뜧۠wNkmSl#%1Cd]na$=ߞyR )}zzVTѯzd9ٗ_/ŝKBk!kA!~DfRKy.Ma3u.-=nxT3]tMQ`>r-1Ä>g+:-PH>b}s7M7; dz?~ӛiqPy^>K+>D_z~oE:[ ,|17 'ĸ).5}pON$v7?u8fmPzOZjWrޢ V`*63oXt.T-{7J pNi(&w㆜Wb#6#[p'$\], }6GPޫjJ|~6yH~3&q<|muj-տۮ hrw<]@ m`%&݋>NZ&}S='f?cCuảA(!/Ť%C(5e-џhc9vs|C|OYDQ/1(-8"A.:IltHPd3(tݦz;yO{#>s@h0$-c* s"U/rvŒZ212lQS @9)V^@ey{O~MC[mow3Q l^܄.Ù<(1>ՕE6'3B49*d"DK2Nu"XϻgOGīΙq{T|`/qd&3rʅ.,W#&tn/.<~lw 9ZYK0X`"k"ۅX3*O PJURs )IE,F",rg;,PyC<)ʲ]..;"E Rr h'TUY `F! L>ݺ@ۄ(=/3uX4R 2*j r`HmH'rba~DaaH)+%,?\'ϵ47\#K܅t@>8]zK8ю3ICEDjw"C 9#Dd`5H9CRʦ !pN9)4j|o76IvތςU_Sp dg.zEznh4.}ݟ O _^݇WG=l.f@T0xv{6Ok>|-F"ҟ;)( <vg޵q,ٿB]A`]F?%eRW[=CR,AkbĖէ_?[--E͒[9X]`,y^VLU ժƯ JN갧4B\j.8rI\ۨH_GE+Q˾X^Ego..U4i1<wy \\1{ WMbgVήDs 0.hyL[sm}xqպpv V`>/I>7-+EˮHhM~>זWnjړ4ym*I%J7>Tޥ6E-XZeC.oZ^-UK]VrSJxybXOR V4|>= 3b<*ڏb8~>r_g}~]:xնb`TnGqʈhz9"`<'UsƆj4UOۣ0>=Izͫoߔ߽|?7s_ǫ޾o/h: ᷝ(7E竟Qih*oYDCkj ^4]UmVyCR冏$ b;1xV9*˲Ľ$zO\O B%C2İ M kILY"mFΠX}568,ɥg_]2:, 1BHԈMF'` %2W9|Pl̃uZTDkcbX=6b%5ŦF+.N<׈U~\w8Jug8{FKN'S$:ĊUJF6a=C+;G2%nSNֹk]Ƈ&.lL*8m`%#!KLۢ'BBM޹NGRɎ~ ڨFc d0$R8= sܐdM+>rRu7ۛSn"2珹lŢoz I:)m]:CR? q$ $eLb QfZ.̗=N*Ņi],+˪d3p!k`Y[D Yxoa4mn !1tu$an gU {lO$r^yQu#p~+| "DLuᘖ$v+iGhCN}TY!P `{(Su\ΉCd#8 QdbJee9:H&6ekRȂ$yq>*~>X=i#ބqCn[9k+w>?na\`X`OےnowrJfJ烔4De#He"]R%[gԁ'%:c38XǸY,Z]0Y:}>cQIs->y|6&ץսί=-鑯)jb,Yjx.yqbҘF0=ҧv/9^xVYHUyޝgj1]N|xLiwv~v 돯wCâ]J֐4yQ#LRȧ@ B=⟵QٞЫB}](D蓫]7C^6?xjy,?,ѽ 7yz~i.o&Gw~=9?O,;jywtZߦwgYCT/?>o~hUF4G_g+$WU ߟk7} j* (@EFY^I)s:KBEeCEyDCf"1^$e&)$=7y3 e(]{pDLYruQYe[vjY`%\`]­4փpX6z⁚3DDRm-8 <ӓq:dd )c^\|&I!Qk}h:dV)oSC\M{aߊ LLц֏8K}!dE$ʘT$1˼QI9gG :xmU=5lf9eم ”# ̧Pe+JTyZH(WCXv7瘇(@KC$wR)'< sCsb^w/zJw1hؽ ,05фx.o2H;JK KͬR ɍZc\ JΧKi6Gza({W\s)9+8.$dZր8|&9=Wϴ"Tg8.M\ {uM10~lHQy''g:\t&5vKB{)v$iY%EVsfw'UfczV[ **$y$玁x ;x ;x x#rs^*y | B(P*IB"U[R&%VY Cܦ=b&.WIxvnѬw#ewȚ':"zl?>ȻdBPwExPMHR@( c0m;}0cE9|kن2JYSLN7hAl\{1 CڂsDMGg;26]cZK825TFjƭ&iLDWo&A4&W&{|9Lgˡ}Ǯ8٢:wH/mыLZG 6=S* Ew6cW-#ږ802@)1qĐek1@` 9dM#LgCr1x$UFӦtH&). K0Ҥ >);@s=ZL.:[%[l)^xFK "W1K \y02NPr(IQh:րxAx3ζFǶ|-@atku6O5Vk`/.vhc7x?zA'KV7Y ܵ#T:%J&%lȒd,]kݺNW.UIOT骠z'HWS'*p ]k w J ]QGtE`. 
::uo{/HƑ6*Ϩ= i I֬\DEkH^ ӗQi%e$ʬKKToe"eDk򦩪D3Y} JCėjӋKZ/)_mhha[/QRkҒr<,PkIY1t}ouܸ/ui0]p?"6AjyKcU`tM5^cӥOw<8]*Lm$s4H.e~OՋ)ɺDוS)cx'p% HpP\@LD_PF/mT$Hq3|dBV![1n}yBlT=^RHIq!(ASe8,A-'/Z!Iȩ ޺<^R & g q5cKمz=XaA/RY|fov[\-Δ0dv+<ذIAB؎%$h&#I@ӓ<,Q(8ExAdH&8B\a HB udZR0Oo yG$H98gB{x,s34 ej"(_>XRj7CcpHĸ7yJƢw ah8 DPA\Sn3J@/@>GALHcr<)tpUuH׊~O_ArRu .ʗ㱍;e8䱮ռ|ӽeY/(פ,&Uߑw,>0-x @Ʒ~ƖZ dBcѕCS8ξ]t O}3d:yR9`34{>y{dsضD5bO[9U#$X`P\hAchK PF]_ 3J Č sx x.O YFU; sx|ʒf`$Qjj }psb*}(&+,;ՀEOOPE)&]qϘtK1&0nOؔ=~*?A;pEi"B}J`wA|s]bυ5s)C qW !_R7sRx&9\-Į$ f;ȑuQ!3a*TYxo |m>1F)/O&BD(" (*ugzIr_1evMU /;]tgg#Um.I:ߗ!VZ<)Z( c0HI ^> ǀ{cʭ-dPLX=:A#Y)R^xV:En yAg5;"[}ΘWȳRzω7s t K%9xfL>k*|@aܪN<3ܾ406K$"#Tw i87:mJ$F::1KəmkpBc 8'Wm"9$A2PTJqyGtkܭJJ岺(琍T$˔lii/ :7L]xFփ V\#'Unp"0#$Ie[.j0@`3(kG2jdOviK0:kn{&m\0?3n-9|ĔԪ5r74ʜdXsLgoW'1}).fӏY櫌'0/27oӛy(?y^V VIR3q4Fۡ:a#A6M+l$-&_[3;X9k2y 뜮} TWӵp^)#?^^gpuyUPaJ*#R ZZ| !'l3EL{Cǜv)RAsuԩ^!j n*Ρ8SggzTz w.Nj&H=b\0Z4_m ^ў a  HWs1-5?Ǣ0H]o-Ј&|C#ǻkH3Wp[M_W3G,YVaXFv#/7'}H\]'lʛ?[>}="F(f?aVbZFp^b,fwrhSNّͯ c,UҌ\wcá{=|Ǖ7w2>xCflkb楡}뛻0C6A_D5D,H<"CbtJ 9+K ;Do4NwɮNMNZY=x-+ORhΚqG\tVk W}o C^;GV(8<}V]D@p@DMYv3CX{){DZ_\,ڜtU]9H^Vѫ"oa0?f#oa/Y^ǐ"k;76#,q4rgF1I7ix|ċ'xOfv޲t{ˏRa=q0@ot7@S>D$-3k(}߹)r Hlj2eew{GMߟߔk~0" AI ƂxbҞ/D;OWeeCr:[4/ 筛_흛f8kޮ ^ 0"s@^%!0RQS|C$z._|$r8xFY{$etP"&`K)Җebp“ۏ#&AWtt^9V=8d_LΣ'tHUtdHzÃUJƤ4 diMf y[PY PE%w\וԁNB[IS>]+aҤ{?Pm$10=KSt};Z4,E_E$2LG&Yt| 7VVo1Lڣ/W 0 iu-wq~jkށ\B-z*?90*!:)#lxO(48 ^I;L1 !|w)N *~ 9O;Е`+׊~q.+5j69bd3#›h `* 9G0Q%Vf,ગரn\]Z{ou3O2>@3h=z弎ɸT2ڡ 1twkA;t'j92iÏ0Y/ |EY-A]޳#e#ϯŨ/8nt58wN n8:54dRӳ+RhF:0mAqkg -֐Ɠ?fX+u3L+v9j *E'y᭵s< 7i!k疩 [@cA~|Y'X̿tՍc0ἷ0__ӗܜbލR2pU<ǢGhpo{hO2RWnS݇W 1;1d Q*X^AOeÊco] U7Dm@j{sՒuK;:QdTρhgg ?epU>(X\Ŕ'H"+ǻ2\w#p;O;:P҈n!i o=INԋ&Zȳ=+7 @J F@S kщ^jiC\{=Az0tP˝˜!p.ɛ @lc!>_|D7'=e .䬗ݝHk6'@A{.f-V]Pמ8\b yz堺 ,`;^|v &,0 /lid"§cܟr%a5XtV@02$@,r1{R, C X;Ƹ] @0237qߑET@zy9RԻjA\ڌeJ~.; =P%4yXh!YBTg(>"Ea! 'QE'ǨPQjzE>)u˵UA`2r0av!:[ؖo-ǼllZҺxtP,w^s廬]`7h쁐C=3x x[}kcN+s-'7=^&*?'&:#Q6D2jLDT/;. )%'A=m!0R u\id~w)4@IdM3'gX[kVEQ QkaztSFqCēݫMXV 40GiSERu(@i6Z:SbX1r`GkC2#+[Vq̔i +y̧ë -!T2 O'=(BtxapF`{z!?ފn 8A&Z #C%A/c"[Xj`F?QŪXg9z*N0?Hgg5DrK~^_|х.tMҌfǓQ-Z.r!nFy:XޜY]+4qx )ٓjR8w6ȁqi$,:}HDoL3v=tz3}w4ʰѣ멖p=ZT-UuƢazM]ʁ>i2L^;cP=NCpmٱXZ}T [V*z:q;EMt@{i1l 1d̀8T9#r(4OGId<^Fz(gL`bYR$NbݷlEQ"]]]UuWyJx6u#gRYtIM^k"O!TX#TYdo`qi C/o#0W _o, -X I+z۸b V26rREIQ\1ͺGI6PsA1'רEiOSNҬoB#Mn(nТQ!?[Q#ĈM\hDx"rvaUSXc oNHFkI sBd<JtpDVouY2*Fӵ4/D4ѹcAٷ2qK䢗.9 ʔ|pIҼI@[Ҕ5is-IFBW'#LgrsOR}0$J 0dbIP\ݓ6|%?9g==k)in&G96C zѤ/B\!U$!Q<ɿ6X\C2noMe1BL[.xÉhT,3 )HX0$QփbIƂ񫙠(:ߵ{YÏ5 k/0yX -{~t+͈I@gn)0)RiB`,^T0SsZd̤UlܸW6f0%u<S W œ*-(wK4Xy@'XMG,#+N&u&aC 5 Fsny 6Q,aq 7EYoƿj \_FSiUot}׾;੎4sY~Tyb$n1ɔRDjbclᬮ" \72[h]vTm[R)#ӗP۠+KNybMJV5ц}xƴ&w#G(F]7]k[)VD5eFSICnTFͬfԺjv%Dn.pF]XZb݌XDRoKta^yp>VG54Z%S42F){Ek m 8ftN2*[[,Ȫ ׺mCPgfH ֜jkĥ}Xۛ~L\ TspEPNk5K>R"\&h[4TM;(N:eqkA܋.\˿Tb'AZ 387 `p\W OT boXϝVK&u$+nj zq/&O+$R=0<#Ťw1$m:[yAU+8'uohUlMְl4t>ְF0"f{0#/]2UI A $T6ՉПT RO&bSy$c :(9f"u\ؿC/HDG-@H[ 7wzr=O./P`cۀG2BPzM 2?9'_DO.In/Y%Ym сYٶےT.FM樂c9r'J9G.VϰT1׊-|2w4wNF&hXwΆw:ndN8ir/wy x6\L7HRH!P͏p #7{GJ`Sr#tpϔqX1uY/Jp, jĹE̻E_yI{@v"MoƝ u;S3=tbQn`2NZs&fgzr^ǝʋ%%ZE0z%Z"MS'\WX]Tղ3} \XBG"'OI:nI+B="iE`pGYtAH$ A C$/<fDƆ#1  @KmR4sN rf F0 w8֪ U֪>1%j=\ _ƟȪF9ޗeEA[佘JxBNt8X.PVx&NF"uxDhm=U!Уh@ú#I%!KI:;A<,%JX&RKDZK Q*D *VOM徰֩P%S9'ZJE|WWoRxclV.N`⯰s͒?c2#&w#R#֩/kf֯QAvr0xlBΧ \:sKvFR:   !] 
)Zxu"ORڔۤgp9JA`4O_̷>`Y$糷tbwpA4AqO(S&F"X+1 p0]35=T;Ccor& bᱠi zU:TV_j&+f-1Un758.(d@c3qq~ںt?\q2[o~q u?qDME7#K;:LI+!8F@cIcS:>@ϻ8һI$bx$hLB8{qc{fdl!uay z:?}L^g^2A՛wWp6 9_4 LË&gRxx6ni2Ϯ6{ĺK{vƠ}>”k[h?ilf^jZkƌH\+26Fh&.,ސ[_fN|r~_Ftd=ߊT{沿v>qmu`B?]f3ӬAH0bG`̀ Ab"Y aVn $=۝Io4Q^Vʸr5ne8ϭpu I[whsk6 SbE[H='VH}[WrLW&\ObH?!$/+ۢǥH9814픤 Oa"4#81ArlJILEP8`8aOeU[=BeY";[4q4V7$V aEuLo5U$`Rc6Pe-^SX2I] DYQR, a)T!&&pjqg=b Gmv=k!/s=B v{O|Bcִ'=B.YK{R='3*捺OՂ5IVdCiY6~WuRIRdD%utb:;6xV$!h* 088 a6Ut: i t+[%X=ݝ(%d.$1=w̪`FVԈVxm<(y2Q6Fyvm|g}C(HB$,(%X d(XT?UbXe(_e>KF)O( ́9<])©J Q$kBp UPP p][o[G+aGU})y$^ y,}efY o5eY)<I` E]]_WW$%fӗ,`ͺG|f뎁YgH (Mw$GI [8$JNX4,$DƩDCc`Uok 1[f)J#::veP[vRb眖u*D}ؠ@Y3F ilț-T+]vQd-%;!AH jRRt*# JEQzʊE#ؤĎ(k%d.ha;H&ǂ<*aԹ3^D>LH'$L*6GwtYީKwU]]UեwS=M+Gfajs Lb.]b*]U3AfPe$II(dr294E^x}rRU od^Jaf8bH:v^J١ s"u6ίBc`b&*RmHB'mɆO2>ЈW8cq#y8u}g |p kZ^:dtZgi}]T|2|iyZYxPƎՌ+LVSfnm-59䃒Kͪ7DN%)(YhO/;@H`kM$F^2h\wEP"]B1ca#ZQZγSF2rWM+# 6Ⱦ6|F@8uw!$Ax5^YtިPQ>R䀄bL+ZiU % =CATuzOt>,MERfSc2o+<%ϖcՊڕQ]ܼt֮ە}oW~e^uBnQ.]Ԛ,^G{֫Kq\ztu(-Se{њD#DW&AFmt VZ+%,ET0`f([28@=2_!}r=kHPnlGMB+9ᬷI@(L<x Sp9d O *TKb "fAF^`RtVF:J!9>r!= 6Z2C:DO)}\$.\1"{bW ֵv+C)uO)~i}ѳ]C<"Jp[Mh_jnט2w^-)Ptƺ̿V/Z:G%Ax_65V\ˑSV|;mJ y1JQV>4} QX(8)b-ccS^bG4Ve,fcL -;X4VJ%XBLeDwj}dy؋Mv\mU!>3BӛUgݏuzP!LUfZ ŹѲzE_;g*c ٔ[[6olP5Pjk\Fӯ꽪VePN\?sUN(!FHep!u'RY3U#9w蔔ۤ=ߡ0jjO0uv$ىK#Gw8iD*$j"&B<@oM|Zܹon v;웻տF5юUz<@jK'uӿ^g}pG"y}S-_@gS>(;<270Vs`>Cv'>[2+JTF~W85DƑl)]|W0YAt )a]ʗ[xsLw ]?1]Knq=ι82uε^|8N|)a,>/M\0NO1!xl'z)_T=:Q -_aN[o)߽JImbJ`40Hl4ZS NmIe5ҍ4[X".|s VhN(Q㽎,:_~~ʀeʇVEl<[,~a8Wv ܋g}/bN./~<~1&{~_L+%k4 --T *UAdA^%>02DFdDGI2odAkC^u,4զzL SxVjSXm Mau ̓X*gĂ6^G4l |?$\z?Rȳoɳmԭ=ӳˋ7aazCR zGY{yra*o"} rABO\Գ$a'%_8KJV`k@J^dj'9 J7 ց83 *#<}Ev[^K@5MUoNGZta,Zl, n[[ ;=8|Ci!ڗ|mvtr~n-񦶐wfu}w!?~`ryWm9j0X9VBzek YF8R|}Z&ۇyy&?L/_OL/Vߺtz"E:3UˁT(u:z-yh[L>06Y?xͧ*{aVtu}6G~~0=O, $)}4 >Vil8ZۑhkCdF:m޶˓ЖQh뿇D1˷3aD8tmKO-O!3UMo_R* +>vE (@!'QT!@%S߫ Im`>+$^tbN O Z;Ak'h`{3RL"%:&!+#: :;URXfwF~`3{}^$֏-(βNb=؉\1dVL)Dm6ۢS62Rb/HiQcԍnQFimv{v.>Y^ ShRR+z]TuvI߅J%Sz>]{yz Aju05`iHJbj@fߝP4ѐNT2 fL֣s|Y`Sd,@ 7! ݎgFp=&~FpmeY<j9f(K<_Jq#Fc祔::[~wHi=<8Qч>),B(ewC@S"SӽNEP0% L+ndA%7q j,XsƂ j,mf庘T.)N}zI41*H8t T&+v)=(zBc G{} @sp.?7^deo9.;uzq]Lco^۳o؞rJVj{rU~7D!_ֲ&bgZ;(6(X2*_>)IIŪ-EAaM7V,ߣ+ř/ >eX/(2,hږ{xu7u=76c c W!;FzlvVh#Iw= {@V,l4N 0f,w$"D%u]J5 02FԽ.:FM+hETqVъ:ZQG+hE+ꘋOy2,6BDw[ᰟK0 KTvX1$v$Kk:~zGWf:ϐ&к<] %*I*y핕{IFTIT{ͰF!,r?2N#=P76rm#n(Kz \B}GݺD U]vZt(8<cgDߕ2Oݮ= D}|nv7%gw0D{  hEk+c=xr̥l:(i0rߜݦgx3ζ<|uA1#gq|>< S֭xG WޣrXMnimTk8Wf(7B֌\|^HDCeTw˾m^IPJi?UpT((i gP.c]^ԘQg>&md"ȀF^GMKٷ1uV-eR-eR-e\||PIɒ)Z^ %.#;At$,uVlDH9 ÌO?cSE=О@@1%란HJnҖ~7?,XgH (MW+I kD .b>dH%r6 MێeFn=fkro6rm#6r=r;,M$Pw +!JP h29!<(=YrS=ОzB'Rb@>ӛ]e_\M"Ip!cU/:UOxy#K$F| ?~ݫ~|bϧ\^PKS&iB.o9Ȩo W]Lё/oY'{;_4 ~s$XcuaVW[B׋ijgd|l^wu._^whhϒ1<?xXH1.%MؽuҰ%Jaeb| -; ˇ2oGqp>h:Vcl94z 2 4b ^Kcٻ8#Wrͺ_ pp!_jҒH\Y%D3UtS]5$%cA*@Kʄ0CBf"OR_GY,www;]|\T9999};-9:@v?_tfuUI qaBD,7:`K_?4z?=Czw`~orl<{iŁXQ֕ǭgN鶴#E~Ffd(3{};bfTqJj=$j_!?nU9U5]&FtB;W!-ɯ)NhѾJ9Dt;@V!p!$WK!P;m-j:TU$eu)%:Mc;U<ˋ*FS/tEz1 {\EO=X7s'+F\X][[^n -6O/Fk:T\op:R@~-(|ws3t9F:MIX: 9[)[ .* ;g7?l/ô s5P囗'5ؗ~w3_|^qOX~҃vv[?!:iƧ%9d3W''+]I:(t3>i&Ulp'm{C>}w `FG(c DMZ2ΓkAPSL63:{)vD"^wX.ou<ͫy9oǛp'28ՇlPJ5,S4a6unj4A!iՎh'`9MU41Ԭ]7ipd>@yzHJ&ĝd%MĠ*W,F^W{ 4eH&ڃ)!b).X!L85!N26bAs'%͇-"nk}YفLhuU$N @76tGA=̑R**^eoWFB*D>g#Kձמ2Mf<d >"X !]]p'..(]R95`X%&R!jR'z3˪:p4ՉӾWĐ cGu NveEY&UQCA /f TuvJyBњ*[d@P%R`$.! 
d 0uO gd].uٵeR]K]v۩!gDz=I|D DPRۇwCbG, BY99Ć{ rs p#b?"6K$삏ܙF!)(5k9k+Vyᰎ &I!s d%rѨHD.%3Es,Uy+H)F wWlGdbEò҅\uױ7 RIqź'A,V9ːrm!l@bPPJJjr_;彨heQ.ݎ~mki번o;U2⣱ ĜjwS$VTŀSEENN/O sfP)0I wi{T@O 415RލCN^sg,vVd] "l ٵ*ݳ/ֶbҾe\ BAxHלL3wdvFkzD`svy}s ۶Y[ Z@q|,`;#Mh(T(Pb0Dd]!Vy oI R,6f̆'!+s+gTn uh"EՎ}ہשO/Mh, ŸU^ҐeoݿO]nC|)k.]:1By:B?vvr~dQϯoVӦ;nزQzۋw~τKeY 'fG۔YЈ8WP$ I70Kp2ɦcީgLS0)yVrI'c~x2Qbܵ 1~Q f-; r ޚ,I%<:n1! V;\=SZc>2/aߖPȢ=5;dߒYeQS䒨UDmʐ(oϔ)YB#L!^>[X e()&mzZ#C\H[v۾1Xv5c+!ɼ9{;E$~TQHb6 9,;gEp};YmZ&y˜HͭƤ B1F+lL׋Yf Zڌ1&Zшa1mߘbC RGro`!2+QҌG_ pd!pbl}hi֎t Ԋh,47$l'N9YƃA'r.QϏGGӖGR78!({x{a ZjD4Z$i+-~Dj-Nā=&H~q(ߋ[9lyo^wt܃+h4N}CE)DaLڥ'Ăm;6=DA#WԘ^bw{vH($c ފc>#6V=cJf+&$r=R}o#@1I-طVvz4`RZ>qVA4>?-W %u;Gv44odؿ,ej fiᢠ?K{8?,ﭵO2VȄ0s43@\$fLZ矚Z*\BƨyU= 5;pNoXuJZm$=YG\- PbDmYcĬd , Δ\46uyrY4l'$A_/+GbP[UpV*$ᲨSH6"2 .U44K3*^)D]2XG]ldvzM)8cq!IjbKuI\Z,p Il!ʞ iaΌ|y1[FdJoHQWd"Ϊa(1Dڹ"j4ͷq zFŸʝDd?啚X_ηdmNX7e`4C) a Ƭ-Zo 2n؁IRX~kONR&1έ?ȴٿ14YS0r/!'[b(#M.x@Jq+t"VR1_C)[<E9EHpE"9 f"B;e.t'e3;b[ #vI,X߳(06˪֤43-Q,qn#mϘ$ Gp)V=<(/9{"|%+I]KT(_ E i؀j@J6}G!!y۞1ّt--3h#clb32Z^FCOFF֍@Ge 5phQ2k ;"fRtDn.NnYOO-.аg>ܧRoyy7ejuu[Oz'Ϸ"UZ<:|Q?[k7-&eejq8zjOX?Y#BKg̜18-I1OL98){)4y)%dD՝9{|)NE19UeS:Y5uT-ȣ^/uߖR̳1Hӟ_obe3KYW>0yŲHBXT+֋Zh\VN\޽wjԞӥ#Xbj\HS] NCv]Q1*. 3ElRA5Im}`eG pFi [sBȱLjXGnXU(yZ#<$4c畕$n*֊jL.fc}A;K [Y  <٩k¢5TFɢh$T-JEePQjhȐWǔ2Xy"s W) :sSJi1!ULb+1JQ=1:`ڞc^E*Sjc޼^%XKdIQL~jE)(b.I$Dt29Fm)V܋.+cI$8`6 F3+&4CU劤Ǖ*%[ܲ$05XDk%vR`#hIrC+*hjB!c\w V+&v;}&9ȥ- fL. <ޅAL_ ]$Ahln?^hphыf t ;@k^e~$( Mpe Sn@SŋIIm쉎װ.N4=5w ɝ JM@5 jwOYwF `O+6ӎy=48QH>_kN̑DY7YuKd5VK)Q@9=]$q-, ZnTd'Y],X 5=~$H:}S<0v(q:$%Rupj[8ґfwjAQky3+v(R^sF3jP {P r) F<~.jByd&" xj`~?5cz 2@n@/{ⴌ }CVm JՂls1%\ F'!e%YC Hz"c9YLu)(MړpB8aQj'ɐn~ɍٹqwf;崐:i4AOg*fHG]=NTdAxG QP_HPQ'Y 1/2MgW۠}6^OvK=7\Ti1 ZPC+/r6xa,nV(CQC)  P49D39(S~U|"m$ט$<{t}@#)dAzݮ`Izo,֚eָzNv!C-GV^[$3@^oa-433jakx횪(%T Z4ڥ+ ZXh MHDZ6 n;_ `=œ0W;ELі!CNb#Cj,OMsqU+ Gc K:C@HxIhfvy;'KJ&rs5!^E}Xw6*V4,SSsDMme)OnPex"Ę){xW\mǼ1AJ2rծIbQ8-Ufц>$GZAf9},g'v ϥ{dwT@aTVB%9؜DǰP?Ze-V$L Vx%pnz̄swY+/]L~_>,nOdp~v)ADaz[?%_wPCNsr{^8uҸNwŊS}[]/>'RM$5ON"rqdli4%!1 )ƙ^|HOL_H猧@.ɥHrII<ǥc m8YB-?DnjUPG0EPqtco'WĠ%Sv:h 8t'>F'? VyP;ߺ/j/?:LnK?N6K| _k_Ɋ lNwqDxV)xÚ5_1kb|ɚ/HTeZ0y'eO?8!2 E( Ai- <*N|" NjzTo/2@4VQ,+git:srS8t5w=g@wyjU:/lHjC8=(NCJ'UdiđᧁU0HAcz_XGz V+7F4 dlH #O &q&E+|oJ,/Z!-hyfS*+-ĞfutLoޠ,i[`8~ٗwEI^kIm$=H<۹E](Xm}g3#`=p #6t^ Gsu(IZ:bb2Ɉ HUL;=P /HRkv*Q`U<;tC}8P^ hlG6> ƹ\[vS߆S8A&0y)" ǘ A8#ArʾQ r˧i![iƦ9ƶN5ӬqNt5%KWyN]u}uSd-Hz*9ʍfJoNRyq{ty 4cv[eKL491\w2Uc͎\JH8)aEw9ؑDaoWe>%A]cYkMXlylb{ ~sgWjiɩu ɲrONkE[R8_{iTܾ_$OtyoQ.떤09zY[pd~fI9dttR{OA<ݺ_ p6iIY8'J,4$HY ,wpM8Ln1ĚE7߹]ʇfЩyi, gWB بjҘCδ(ohŷ_o?<}؍1wM\A@*b$Q%՚R*,*)JGRsR3@vZWg%pOfr ˿|]X>?G?g_}8 i>=E3lzPh8MPyt}= gk`FkЉ-pkP>8q~Ќ#HBpGr1*1Zr\Q^hs#F9ׇ5oY~Fuep1XwjTƱV) ܿNŔ Kb NCkU' ЭS}2KΔAeUzMݘN/yC<6pxaٟ.m!{θ xfXa&*]ILjZ 0Md^m֦>T}^?}eބ<7TƱB-j=} I4FSE-2 fMjlC9~;ŨɐF q|C }ۧJ,4ϣ ,'ch5nql qlB?7I_X <`=vy!䩖t>߇TY7Hљ`܇:3`0NƢV8Ŷ+.Нx/5Ç82M( JJN49/ ;8W D&mխ ϙ'paʛy#WA}6x4fDBCJХH*Å͡ՊIJ4e\|R IhFh-WYRų2׬Ԑj!IZr^V)r) l4lvojhf  27mB4XK*~Wj @c(xաDS^qoc_ؗ)_O߬4"7g1 _l첟1݇8\g"\;֒`^<{nf9rݛd7W hY;韯*i oWO69\֨\@+dKgMKijij7 ?աvevz4r&ZnB/Q5%UM=FYJ&ڭk6W/vPWb|)f\]3@Y2'2'.sxH&qkTJ襓c^~` \ Xi:'3`Gp}͕5oYkݩ\P_"ȈEMw@|` _>8"֒Pg巿6wr;͎Mj.'[]n/[PjGKs[g.qAz;Zb3هM 6s ?oj#FQ ;5`k/ }a( 02rFӲ7H 4#J$-s0 Zh5HaD_b5E-.+JJ2TZgdXAJySdZVRsv6.95w yקL5k^*\HBk )I̋Y/.KM[֗Tݼ?UX!#_^kJa-}통/.KM[-V0xv BFDהpu%5Ss|hШO8'[V qmËb1nVt}𗫻e9/+daFDۤHH~*f Hd=#uO:J>\ݗ ^^zW˧7쨟}Z.>O&*],2բ}(/z^XտZcdş,j] 6RbVzܬ*o޽p{Sl740*≝4/CO~lrwmPs1@Pv*'HV߁NB駭/?wm@dž[-??pbox_<,ooJV"˳*O_}e~GF$,zya͞4߲4'l.5б뢼/7{mWӏf;*mOاDZ;m`,i%wyBѓ?- ;%Nn;EΕWR1ڢ S  ۋzHiPD甼.?/JtApM{PKZO @ 5ХZW#Z[+OIEd\Pt'DPh钔@;t4 RB~ōeYgKq )=ZqVyM7G5k: 8YWAx?s{. 
>BjV/iwlgayz^=}Z$2G$c$2CidXssjm}rW_d#6⻉oOf 툩RXPydO<ͽDI:jW%7+C͔,>FYB<ؐ]wooRΓ=VVȟ\}S̹*"3 Pi"TkЩO^ڃfpxD;'֒pҟ?IAE 3GG#Qhp"A޴Ċ;+_MF))EL%dZф+,ӬIMkL<,gfDW:0oO:;R2}ˉ#8MbetWųͦ`U]x?=nK4}oʤ݆=L$'Zg4֑E]ƺ"ؓGt 2$ \Rg.R%yEH"2E2K*,h*~E\~V[3\2"hw+~89Qs'_3P!$?j?2.ܵ$}R-y#>*Q;i]1G6-y5F&Ig1ψ D>znznJ0Ѳ2MȡȀ&NUUjrcQB@By//c͵y@ <Ͳo/=B[Q _. {OE&d9y|WzI ,V]D%[ԧQ޲jI.2y^7xk"%Mַ5KTl~ӤFB*ʹD_ߍ_%;@UOckQ:ȵSP$΄DK; ^ v  {2q="cDmFlV kBB[}c o )**rFD̔*)H*9eXaQT%bD,oE^M}_:UהGǙ|7a{{&_4AI} K|dLSDr*Mg< H>x@@lo]iI.牚F1J_Ȼ~R̊oU'> 5玊3>3HEŬ)ֆzѠJhTC]_D6. +ړCT9ITc&*&~WCVR<*5zƀsޞbH Lwٓ+ʟ.WhOc!4\I͜[Ј 8nKᑟa&:՘Pr$,yڪ3c+v9 8Ӓc;f-(j2 9#"r:U(0^SRm\[f!!SԂZRؤnIeuARYT,g`$P ݁ f߹ H%b5A#6ّClBSM$FETᗙoo _|$Tk$2r8."L ( |i'fc`ET)-m^{ 1&7i٭،(io r Ɂ%B,S]ۅ-T(RYd/{[C.1.9s eFX+8qq 6sԷ HϦe s HvT.ځ"("5Αw R?: 9 PČK@h5nLuthz:eCŪx$ъ3WNH<ǼZV۽֮[ѭ;_\SկQ],Z0[YW:(o?&vw/`$; Ū֍͓8mNr&2lTp9m (jBG2h&Y9D'󺧦_m#t@T6S0yojuJ0ڪ8Vda@:`/c)M G5cJ*q\czBK#uiG5QGF+45$؊HՂ7W1ݳޛ~6CX˖"_ua1G\6?iՓ<_fd_\AJڟF_RZx:v@֌-w&mk%eDpu4n]koF+qfM0wf vvg0b'0ȒF8F&e%lm>Sbu:&LSlR*/Y*O&DcK))!ՅS˜c{fP$֩dVU#•e5yHHw̳'>@>z:[qv4nVg =PΜP(#ϥNP9j@\H`oBϛVBϛV>>0ׄ1:"TjJgk =<<ԱBh1'NJ [P1zo3i@b VLB }:u۴mm-\؜xqf&r=٪aKej4&Ǖ 9;}@n my饶SD椟YO P:l´tI*MU,%VdE7T~N_~=Z6ek>B`1ݰ;ۢ}{0L"3 U֙E&~ KK$Zl 8,`4Uz Q=niR|ds(IP&yŊN!OIwb;Ëbrr=zɨ{|v\fuKth ,qȁH!Idi-ŎxPyHOn O5=J iD¬BN㐃[M}rSa?j`DR-Os!O3mL[Š|(hG|!QL{vO VFht~iVkcY2ȃG$BRe %Zyra2]*ÒSB ϭ4g?u:T;9t-Ŏְt6ҙdKu67jMHOs6[ځjyuĬTqB=شqxWB}VA|_~l{˭WͦP27Y"˯ Vd=K@dӾ#gd8B~H|dSd^W{Y`7cy!g_6jR -wnjU K3N`$-!7'I8UX0a`LTySOI< ^b*i/ު%hŸWq!OFSg;R_tQ%"g(΍(`+Icq*艮?Vk.Hn`T-Ŭc˕MX,}–f[ZJ]S;";?9sDPKix/ T-'k8a;`1}?wF]8V5c.ȎqamT2vZ@Z}A%HDM%dm7mMM W?ޓ/REI+Mei_NIw BnKԾZ8Ũv߾Yxc7=+ol%I\&>.|(qǎ}v]iR95Fx!. e\[/a+=5eMˆ[Gꊊm7<#{GYS\y4#T6>/ ZG..a7SM`*^pqbxuT]ƽոc\a?ycRQ &JhƼcf \jAuqĒu<2iَZJoc!gTWKkMF{]~]{T%@ZMToj/Vj}Fώ}ɳKĥܛ9[|QlU*ԞD?C/5K^Fп4 lϻc-;tvD^hx 4sc{TߑZ?vs(#B")*4E? F nJZo ᦳ9Ų58c IHRJζW/3/cQZO|=<];S~>|5>˻Ml s$I5F-EDWOԑ3qŔb%n@gh6`\ A w[[?HscO/޿9_hv6Nodlru0 ISpa2V8<ȓ8ƸqV c ǿBC|Bn?.(_OZ{__}Mw!xzxP={qDžɴ'`K^j Ӟ={\Zz'⯂8+ɞK%MSKYw%X;Ds*\7eOi.}ч7b/Ô_}~^3uѹȏbPG_͢4ceLvyfr94]vMnϽvs/wzyvgܑg]1/z.[+f"o $S_EC{?y6(j'X \콥Їa?W¿ }|(؇Xfk32i4!`T_90?ջw?ó|ڙw%oeߏ4ylWOc[*迆mXͪz.Q6~˻}tTmb?`OUAw,@_GjX^o|05|8?i얲aĞMv0qkji\Ri/_r:VDbTm]y*z0k+dK'BR6ֿΆݿ("bP$0(&MZh#qJ?k6H ' @p ` BCQ-ANk6ޔLAAts`l[-(S\&^BF% p^EK[2L { X0- /PdMy6HyOh 贅_`Byf{@4.],=*tR%/LǑ!u*zEI>ZߒosUgU? ɷz؀PRypH=Ò'A Ps%%u4 HKA1oI$߅^ ]q;H/\ïR)t_8-n/P.z@3ni<.COlZ2c>Y[23}Y#h8yq\BGHq!M5q? O1D?W[_G+h'B)iy~D1cGzei`1%Nϳg-pz{ڋ84GeES1f^,! ײ&NZ߲o?hGT&L\>|$NI'L6QIP eO+vTyhr xs,+96aig y$I&M.IvhK>kp:X `t(5L(.L=V$EpMP[ ag %WBw(g-!Ć7aϟ2 )PdlUm[R .ʏfDfid^C4}-~5SיcT5Z`M:}rp/tnU=`cibv傆hT;Q ð<e A/>EK(285A#L DJ.Mea.iMKx3#8@ iD¬Bi`'HPMH)O)!Z'D`H"hw`G>Zw0d @M jr8ptׅثU)9p OhZ.78pxO\pBS[" 1Is?s ""Aq}pHuO?%3Xv>ߴPG gښۺ_ae6;sd\0k;S[5SNeR*\eTKo㐒(^A rnt7u@|ex> ǟ6 o;#3 a!_kp.RF-Rg)ڡP ؉x;lTS!.zQw߳[_<0s1^`1d5jrfeGm, -U*\هǿ4U++4ʫaf}h3'| ~=U{[\rWZ3%cA^xC)\DGPйn&xPxw޼2Z] =0|ědƩe$3Dt=\'L?aՃFL\f2z-;&~>I/g qIWj40NTFOײwnRtOwPA´.-zxѮԆ-g{TZ1G>Xӽd<GZvҌC{^g 0#YЎGN_xkW7֎@!8pPLUnD*nj3;yioo $r\x yvm2xɠfRP ?_ ?V.'1!'&Rgs&38ӞؐLfxUFd6GK0sg:֙h(;$2 %jC~i.ki Ga6Ko::\<D>B$ˤs0&WO^R߻N˹^< Y%!wx/4(Q$$}phbĂD!Ys MRw4U An}t.p&0P1Hqbr.pBr)5*Vh?͠3]$s%Lų2{j V;[4%8 '/ENєtL&"A( VÉu 9!v{x8y3n8F)&1*BꀋИ(H QX"S|O-I4&ie5`wVFzܝQe3fF͌j(Jak!'An6xytH( 5Nb AR`h2\, nn1i `ٞ'F*~AP0[G"AihikJ9OT6 %%.EO+"8T>! dhf՝=!{a6O?>JE4kp:OE%JúrI7 -X7`n%] WLpXԥ0LyO6nW7ןﳽ5{p^ү;+9B^ ݫj6II/_F%#{y]PO[x@ݎ>KPF^/]h0Wf>h: OJ5gف7Fbosԕ߿+JWW'H g )m- `+yow<%y2CeSRneڢ]UBozzJ^`E0?V,]U~R.̚9Qtkլh z*nOY(6uY2%',H0z%K-\x2[dgf(:bQmoLguv΄оw \apC[% Э~[qjFw;A$tv2Sm- l'&AmˠDnpf)tرXG. 
cB{RP b^&.ލ7 85N2^e /|y~:/NLrWB0_\ @):RNjoBrNjLRjwQ%", :\fkprN(̓5qRK‡ Ⱦ8_jk^[߁++f5̮ Ѭef3U[ .%L!_|x ۘK ~l@s}WAug95'S)U4W{wp48 @7iN U r \%|ު1'*FHN\m6*.j޶wc!,PysPQq 'mbu4bAXaq"k|"22˴;ۑR8Z\QgVRKl%YQ&fOb/d0NS"%1wڌ?'9:*%rwiQ.n[玙( Vqx")*⽑/$Stz?`P㫔 VrҁNJYҚ"ÖPɧ4w ['؟\f}\07`0^_Ti wMFKƼVp_MFwvO;5K-7܉Irk\V}wQw+_Kʧ▔ӧ{«h%e9GF{juAR9P/F\ $⾓CO4 LjR}')V5y6aL5w/{|D'0qDƘ`N`B6ze߾^i/ߜ,Yg6YlF؍X#Ue _MZ4!1@.b~*5朕.S43P6%B͹BrkPF]=Y 4,5=µZ6u*xH޳m,+BpS$My Rrwi%E$KY$&EQ5"Ci,@!PR*!bX2uFQ3"n#b{ c)K~4.҂4Oq`4U R%SpA,c8RE8^F *@pryuN"@@1MhEAiLuW(/$T " hCd;ТTy$%ʂjrUuAGK27l1̬%7 UN$~j K@*Εv޶X@EkSp/1V@;IXAEЍ?r]R"&ybF4tJJpR2UbhS$Od&1`4LK5\R0QRP%- {ppwXp/-`nXXՂbʲtrG9(,]nA3QĂ& Z,c BE!1hV1Po:o&~DOBם-D" 2w/SYN ֠D;"HdFz+k41;ׅ hO7b*Fb pO 5KRlR ҁPRha3FZRMm" =REbQ'S* DFL9g)Icg-# O8E'46qc =IRB){PlCrǯrOm$ETznXA$N*H >*v؛m*HQT"m \zo,l۰p /,1DX@4f!\GE- 4b]f%vCQ.?Bn1;Z1N{J}<7?ֵnM \H ?3vY0Mf˶e)Lm{Nxdf-o@8ia:-u70Fs̷" nmؙd~Q%r0G~f Q71nn=y:_O4''gIȴnEQ0Nc{G?9] 88cL1pOF0X}@c0"|(i~o?qǗSx`S[zIvx \Pp ?/fh4}i?AF=;'$ɇmO>Q%_ T{_۟NB#΀k&JuHBM]ƛ^܋ir֜jq !h|,߿ U?JE4$]xӿHFIq9 &a5"t5GzjƆ/@9w\.n@VfLrkFaͨTecHݵR)nt1m ^ EyU<\`+M*䍾If;:)D?V~3YoV^Ϧn=șܪy|/6_K] `m`[Eo+NZh/z6lڴՠ2x!֒[Iviۈ@Px[6?}pb͟{x"1K3"͗T9p2<IP*걠} \3.+8``87YBݖ[~&t}4lH5 گ`QtIc8,1:\Jcː4!GcGeY)Pl%-1*!R͎UǕn,J0k>MGڠKQ`$1 wRjf|sb¹AFRa56NIb5kP[c0v< Hn>ދOTB]>b }7ryǺ\P_Mf".2hp '$0|gBQB):㞒J+DAn*R-X0(8:ۘ 7\Lۚpҕ>o/ LWJY"i8j ǣngB e(g '9ʌ0JH&9&ptb%+(F(cc h Չ-%v:1u @K<ւwKq /%]u0;*Va*fak鿩@HRsi‚(FvT$LNJ#6p;̜HNP,(D8@MѦpPIS*y4(24AghJ4.Fq*aO= )@ %%n IkQZ=nl3B5LBl(/pTUg8Fڈ3Ƌ#ÒW( V0** 'R,"R$UWn,BŷƌW9vƄO!YNd9&RQYQϞ=K΅p]q|njâd`0dF|J*haT uZcG]M^ݵO?MkY*b/e6oݴO&Op\`x8rV%Û~Wcj3'-;rߚJ~af~y6!~Y>i? GPKP^7"C\TN.a6v_R&vbƽt;Xw[1}*@RsV1tC%Rr~oP 3~'1Tz-X$]5OڨnS!0  s1a'(wE /nKCnp`8f Jċg A&J9.*ҩL9P]EQNٻ_SBQ"@b)E6q4:qlo2j{i?MJ~ɥ(U]|`Đp̟=ˬŨyqBG ~U)lyC&ZK6r|g.#!*4Vh܆vBF (G*{4'F,ץu6_qHeD.#_?>{>_@O|vBt875UmH[ѶfѿV1D֗qoi%!<5\y \ŀpЂ0e09`+KEbv81E<)'Ҹ$HԺZ#-bweR(I4YcL' fY UL|&bk᎑-5Yjsd\'VB`D [)H3朦6J1. ɛ1,] G s.5ZaN+`L#&2Xs&8A  )jr5VJz3Q)*jMFUo=V}\'V:C 'JFJI%d 5nѠ F(D|}%yN,ĖTc9C|K-JOrP0̧ݬqkKθynbQi\bE5- =7xxLh(bnѠR( !u8fklZnذTny>` l&Z-fXR]4S7<(4JPUNc$IP Jis k/<\8{0˸~o-X%QV\-rT\0#8{PQp ,Qhphc8&t2e#$ˊ+%YF` _dD,Ku̲I*.M+djfK0԰.sRYM&T( V @W(,]pfnPLVt(Q׿{5 MUօorJH^+3lƽh9=]0{vǴDnaMs%+ fNnԓǚI&qM`AHrEh %$Mj֤1Dat~gi0O@P&i86g+|s{Mz%^v"n@\Ĉ7&]7F+Ds߳ O9D~|/%abG]XJ±R -ϸ1sҕIq /E2+9W)lq*!9IEd^PqxߌM MU18r/MַrbG+)u]o7W 9pd~jrE;$hޗ;\.el'M W%]EG%pf8/|lձR(X|M"@x&Sٛf-UpqVQC\7̧7쩴Wu 1GsyB:Zk kAkc9K%FP1B/t- ?,)eK8bXfUkU1A]V>0ږ?s=QVY 7 L_A0Ď~sDf/ݞz]ޚ?Ĕ/de(+s@z5JPw&wĹ>9w6r??KW2ƨ!7Ac~za544_|m6zMDý/]-ˌ4s&_N~mg'>\ξ<ӿ,ez'SRw{s)l%cpm@cS1fiěxS#@ %x8xjy2neǁ>26:-BiTZ"ivH1;6KYܫfVF<긯~ӿ3P"M^M0Q+&}S/Z ՆDӉ2Ѫڗ V,Bn+?ChꦠE J,nfPob,;ɅqQh(=ElZ2 mL,L2MڰEmؓAX^n*,`DⰂ) Ǯ?FS*I-njYkBHKR)Evnڝ)Z@QP2#V󬟄azmgÂ9'e*{㿬9Ja[SU؜^H߿Ge^V`_ ~EUK֦Vg|3\ǟ /W+.06rDj?}R"_ZM H"Ș3Ctwܥ__? _呛(Ȅ&rbneǠ]hRۘ~@ Ĺ]\fOg%2" T⸉ZPGUc35hE 3:yP*tJy6vYΚ=B8c1原p`bIa'ue(--'[x 6ǃz}7IFg|T2T2g5H?ʺ|:QMGimaG]J9lGl TOa‹BM. %Aɢ R ^m q$cMC`g,h.eC%!zo@ m*FAoRPfYd, oi0MF>N>P2Q\ЃZR[7! \Ntf^]P'0J( ;b~c*aܣJX\-c|[λ0P"4r3={|n:ε݁r]8aBj7ouIK.k !Q`b3 w:}RTAe섳"LńKp]q7cXۤ)lܤ]sZ_It~ZIQ Nd$QmJϜUh\FUh ˎ/{ t.{ U(_ f D;3)4 38;ywrx ;YW{Of2t$BJONxOtKQƍHԡ_&K5RR[*2"SfVUh$/g/}gFUoəZW9g-яq!FVZ]7G,I N*Uŧ_'E,?>>/߀$8Wxk RDx U4vP"(}a `A;Ouxz򯸞eUǻ&"wW'f'x,0}}5 Wc_ڠ. /5FuYme3Ԝgi ͜i ͜C3L-%Wح u`ī4ѥ$ :w,=S#9Vje)>D!Pk)ؠxyrnj+SB\ >#`+'){Q؋h0x@ל$[>T%9Rsc..B(Pq§k I*N|'}wkW8MA4켔[eQ)"BP<'@;k4"-`\rx1[`dD}=P⑁:*j;K#PMhPw5`AGioaE9AQ.ʖ4F1Z[Ž-#(Oڌ*ԂUP?%W\ٻ/~fO>'lnK")}H,[)O:w>¸P1fQނm-r Ɯ-zT}@d\2%!.\;%K Aa<#ϐG1McEIhN '7PfRm6[ӆr.jw<ROc 4ROc H]wڔ𒸢H@͆]tE>6 XcB*I_r@}h$W k,-,cJ:(uqD[sMX&,hN\DnLWO~KrVrp`JaN)ր? 
: ;Rxg-7Zn zׂr!]%$=#sxz nqxI=AL@u Y VV 𶌖2 :sč(&"J[ұ axݧp?_>L澰 d>]5IldC؞w EmEqKU~7;c~rV oT':/0-yLЌ`y{HL!.Zfm}KyxشF$ēP 0->HN\<=817OF-}惵=X5ADM+Dl9Ltދ W# ӢQfќꁉRf7D(:iR%1K:;L)/)_)i?W-C+6QͻJeAy;oZfsVGt5 @:\Lt!1F RAjգ'fr.$dcN4J6pip~Vjtc?hZhu~f$TÉ=1Ppа^|:@WH$#$C)zDηРAcx '84Wpv5AUҴhUS-uyf k-,Ȓ!^YR@%٫YR+׷輔u# )2xr9yTXa0rvl_${u9J*M=䐫v^_t5tBApARwj#+g.IjF`5hg,?] n9+ocqۓE'弚/qN?eqiuwjڻ9< ܤǠՂ Gh/gz㺑_i%؝=oE[/8nn),ߗՒsVgI,+Vb1M_Sh,״?dۆ:5b3MS"lz!#-:Q8ӥ pAaڧ5y._|$]E>M6Mն.uR [M![jwY[wVE-+wڻNw6׋jK67Y.so6&4!Ѷ~HݚO{o S<ⶋKCkB!MǕf蟨Ǯ{O8s\vGu@ W :9l~w=NP^wcm N/g]g^Nj3Ac <-U,Nlk":h˝(!EJ!4UkXz"i>#Mű +4"o}rxT[{Tjl<,}lXE-jaV-߰6ʸWPam\Ee벁X}DwEp]s<-&!nn/Cp'חk[OR|Ak0qsqvwo$0Jҡ';9@J)6UuןiK!ӔmKbmծ ey6kCqԖiZ>~kvM w'`AX8:sM"Y\;-fެO?б}Џ~fDhI hݤi&Z)Eb"=oL䘲=. XW[ʵХ5ctk ZE)>4GID1ͬ\S~\f*J5fcɓ"Rz'׳ ⦿ udnVw_P-.\\e>Ryo 6\ƳCAuaᆮ\Ap ƒ*x@!UQ,0[K>64+/YSGX'HrZɷ@Zri8S@kىMn\0@t=Z˚UjL9Bu׹*X;?x\j>&?4)sain(RLH 5gZ/]`BΡK#i ҺNSm)V*!u#7߿^4zӭz{b=]~Ycar&Y~1*mכ "*YIMC(h?Ityl\Re#^h]0 3*]= .DR] ﮊ̯@f7˂u+6\G cNQ[8-7D7r+LXs(H5$hqmrR 0fi[8b;V;Z]il61в8Г.GѤAoN><5V=&}Y)@x!7 ˵-td*aCbi&r|ۋٛRIvvs;{Xυj(V~EVɞUʾ3v.OkSwW<,qzrg&j[zv=7fv=剕g#:œB€BQNՇt+ɕ5kJ_Ң}|wY$mۡ"Qf?/1Z7D= C"- wo&y!z sq#z[V\Hե#дC.AtHON숟cxܐUc,ˁb{cͅm;rPppjkp IVnZ%5bV-3rהKtyQDHvL33)h&t9EQұhf-e"j7 Z,Dg761QRFTAd %D@6zJ 렖h.v[*r *Xn7B?(xjƍ76T;Ɲ>Ő!zS&ATjBνOw4vq~|QӼ iD$Sr{ -I9s&85jM,aF1"FejKH N,ۼT|;n9%(Xru6p˓.HҌdh\˃Ձ-|RH?¼[.2fܿɉ2/$[U`,H<<'R&tLz(K1fO\ZSސ4 9Շ4̢&x#(dQťƂV>jEIrF|lD`1@*5tLZ,g-ՎIv8oɌaBy񒆀IupS2hZTIB1ϢIfLPY`M$O.R^7a˕ڻaTP]ՠ>lmv ::c!*캖nm,!1yKE嘬7Uy1c^0O(SՆ Yopd6D5ѝsV ]ɾҲ$8D}7{wJ@>g+g+gyx5ZtXH18"7ʓq=mZ+XM|{Sf0݊.m+ ; rݭ43/1z9^{j{>8=»Y`S21jɍGdpH0Gs *>NvEwVسNzʪmJioUAORͤ5ݵH#:iܧ{$>%䗙q2r%H U&_V(k^J.WJꭳABP,8 ڨlˆs8+O9վlpBF,v-}(d)>JY *+p lwMljjU8zp,jAc&,TZǠ}[rET;Us* z'~8`W1H93<.9PdH5_o,J܂q R/ TZ XTI=FšaT"2'r:ʢ!WMڠE`V7R@l)К=O,xBʯ_d(dkYl@:祋3ʠzy@vGRg} IL6!g-EC$ 5c%VE=Pӯ_njʈS]0BY 1*+j+ߵ>DT=ܷr=i !2/E[pHAj(8JoVZ6JsZgY7?p&̛o).)+_n ֥%,+RiJi__<H5Pr5{^SΠ1v2:d.N5YDP1exUgYfƏx'_ϩ,5 P^uC-_r8#Iź.Jr+:ո@Md}J}eI ETwU׾ڮ3s@NiRλ ,~\#ĨC8 Cc=AJ"JƴTZd̓:- xe2dsʯ4PZhz^gRU6DJDΞ s2:-F'-^%qKa?şvȒy╙Oa5q1_ x0l|5G"od57WRRϗ39ay"H)Es\>>'SBdI3J BPb9ks9R~S+ߢ|\,@lgOwώ'/z;~uɄ!dn@]sr9T^uvy2/aYuMG!a- ?1#Ć?ТT"ߍҢB0NO?4@#(D5`[S ` 7:eF- F7D@hp)sXN &8bƨ"lP rP$P1S{,h0Dİ4Zls& n(%C̵@lʺ yL sTF%@ !T(#A94HC'w6r9sSZ B*pfy Oj AF iEavywW>Oɏ??rHۻK`ߝx vSΥYe6-?zww9OHCi<|{rO{JzR <˳uJ>XPFH~A?mk W?#r\Hŋ\Hŋjrq+#k⁋)BLocE:2<-ǂjV*n3"/*ˈ` #S`N.KTHhl!IT+VbԶP MuUI cEjk qy(`8 e/1A}.$F(\x/$H* hbucY*H+vOxS D@fnB90>丙0+,S9+ű$:};Tק}MKQK=8IkCk5rľ&Ʉj!$yʻі[Wpy({ _$bSM'IbVH%΁*9'\0"(Ma56[G |,ty{gv9HBTB;5)G2 .kC# #vS$poAym?-QV_xurgղmRY$s޹KA)e>ZY"h&0YPw\0EbF1)z% f!P#9xhbXZΏK@4nKqykФ{r7=sew6\i 'عwRplҴ<GhT!V2w$KӀ%Hj+Gzͪ]_rH]DsִْTB4RcaU[kJV4˖H <C8Krj ^M#XT/|ı桜`;DQ"QM`5a4i+bN">SR$h k~@XZs`Bihz)ћ,7dQ(ҢmM]Q`Ս&e-lkRI73VԶ"uE ;Q"fs;Q$ͪ;!8g[0Z6\QAj*bXL9v ꂲQF-1*kD` S̤F+pP`lD.D(tІTi-fcԞO]`aAHNtG "_q"pljXҥ}0f%<ö-/>҅-ym3q)W`0S0~[Gǃ/i< c{*g`k+g8W;YLw}?_/oV79qӇޟV|^?N` kY>)m1FX"j)̷4ɣW-v-5nxǘ!9˳s9fԽ`` ѷ|< Xǯ9ħ$k>/3G.:`'qؽVͱfѬ,WZ-b(K )peyz'růTN]ĦRo= DPT &@Exak@pTD^=&GIQSŵŚƳwagjWq~O/CӆwܧrQIe5/IeZ ?XyG khOnY^݁ =jDϞ`xͰB;#̉ # 6\q ?֭n ~p4%A}mo.$洡(΅-E><\WXæLuw?WE'SN,e:T#Riרie#:"A9V!85*ؓH3  :)s08?I"zPG J֘r+jqtaOݻ*a`Fnad6O_qrLM]%H=1|p+t^"vU6zţ9'ˁ6]4բb-!gw[f6>CMY͟f@NpexR1Yh7bFtR4\* ZbmBլƃbZ0 9\pwX\*B!u-/k1F)qѮͻs%@c=܊n# 2 vP|%Zd[l+Ag51bɀVC5û e``HSN<N1;wV~!`شd,2׼q QBz&z@K2nkk6SbbSϨj^NR1چkd51qLF1 Gh@gYfGh@r_}Y&#vWj @^ߍjipvD!F j z)92$5 8iJ\P'{ +xʈ?A35 ![j&*G"Y2?R0lS+^>)CL?>飮]+3]i?8}Т7}4Gx),f8x+c2lzN/S/zNU0`Jri~CG]U=Bi(5" %ePPGP*ĕW4qU{~4]pȬUVg\_q@*0i`TyGO2m0:Z*F\S>Wd8qU& T.=\/~JrV#G Hc%J)6 F:HXw8DO):"="2=]h&&AlfZuep5k Ky߯fWkM;|,uu7KzxOSmæ4~ßW(*ӶMr=pi-e0sfr{RMwq#IewC)<4=;ޙ2 Tm]Ve7*bU!w3^/yMM^cpv=M_nUlvs82c{sH>̳os !Sɨ0@)Q!( 6o|*Q 
mC@a2Y_ۇZQS;!Z3*Cj9dVL~El@nz tL3!~>|[ .(Jh2B%C-3%:a[' iLI3_ 糋b&/Yg /0;嗇<0#-Z3ݿ-6g˫j6Oo܍5h(ݍL~6+̌nk5:I`*Q%DHIX2bɯ(ppp$ ;: t\ בjIZXiJmDeY`Ga\`+#cd$BN%"qeq*X B&-q-y+ܗN9SCOӦ6M3 M["rɄ;iR $4mG HF3Ӵs(B즑8%2Wyy4m ラ*3MP&q:2876<箔vaC~lHc8nXf8ja D{&2nf]3]:mݷicK}!JkwEC?ȟ7Okc:oaWO?[=)f+dԻP%TqF*I"n3 ~{3Hə[3>b+|:6^lۨX za+噗3 4Z_ KzZ~ Mx }s[f 0_4iC ?4'bBæqí$'2aLdp|5m(˰b*-eLK2զUPbLVLW &)3-+Y@&#אy ̫F̼ՁCyņ"y طBHteoRR۝՛?,NIT3tѐ PJ=Rmi>eZJ=ݐjKq}D.I(/?=c?=cAR1$DL($*C0I jEyF CE 96޸kM. 6sVPN'TpEL%gL432e Py#/K ;] QԘ vyn1x<c~>B\E.8f浃ڟ¤j¤JfΜWѭ!i (5>;KAN찾6ʟF/2Cq ȝjdθgϒ̓U0*n/e Kٝ2IsOZd:α]g[=lL2%e:2D*|3TVRXn ǩfLZ虼E/!ǯr:&oUKs' V5m=[FJqzO$PJXvSϬSR˹[zvw E:= +Bu|AZQophCE-/ւ@h[&t%d(>@Q6^.QR^"6(FYK㾡Nu !"R)Ib˜bL$s :"b :X4(Mh> Pt#vw8Y L#lU܌B(jhjD-9%\ZakvW8y@)9jy$\lHkQ6e+|l???z3St Ŏպ}wY8nf눹Y>b'y୙Un0ېw-RC&"6}NVyt6_|;XY9+~Ƴy.]?2ۋr8΋rugȬ$=_Rߟ"и⾱Xl.KTzN}ۙKZh=nt6n=z-S:kZnm7E)&;1Dޝ_a{6|~pF߇˫w'g<:[7}{Vg붷?Y~<;5*]R[UOv@qԍҺoh,pKRtQgcӑiɇTӦ/쯾gWwc\atg3B}]FV@W1d>*ZʩCp7ܫQv :ծPTlV2yƐ/yljbW Ul٦8|,wt\٨Hźrp˧KDA juve7@ WvD-$ +1ı:H~|%lTwlol:(#T"Hu:G)WD e<2;ai1JjДn m^W0+]xԊ6K谆hsl^9^JjpdkG˄貞71[yx:;/# ]-`q Qqi` #"0i4hc_QmWeѢ(9(!q59T-hTTH"U&.[T1j\8&"5NhVXIssܾ@u<~Hُ"Ctܸvt؀ tTSSCk_#£ 8o񴝶_ 2̾ "Bǧa'nd3TjO{c, [ XIV_9Q!(s^st Y4lr܏;0]m-|S{f (|醪 y ODb\m}շk4򹤠K|r=YQ3 c1;uy>1`@$ (*X$dLDAEO}lP,-&v 1u.<57<9b ~@9<+.ތ CMض"a'g,ȳrJX="a:׏I4)u _Mrn7 a )Տg63^Ѝݢ>; | rY[x\'v (:[PO0Mj+iڛVו=;SCOx#|H{[fI6 W-_$LSx5 j^}VZ*VNA:1>e լ`ipvEA357,)fžf'IawӞ@ȏghгD(a֑/ ~[B(X Be(IA,fQPzf:\iټnk YnH+BД:@KeY"H$P4PAI2B0%m^\ry܋K*>-n|^b~}Zۊ6O2ZzGAdm_!XtbKG-?"GoJ}W_+^*5ZJ}[AWsąk氢U-lYs/ijsh uoEo~癣+B^63[= 2f6~F\vxږvf޺$io>u^󇖨)!9JH: HAa?h'T݋G-T/z=ëo9ף`qie{C5bB"eQK HPɒ3Vw3CD,b2x.VDJ̖SW~Zo6%T#y\V?~O3,C3<̲|q&a?~xwB Bfu:$I (: i|WXJICf0 ƄŜNd&YF:X4(1d~/%;1XO>v,RDRz}(\UQ"/Եv4`FGJ7_/d~?ɹ]Gv?usә|!Os8w@-e)LN!'z*#wlwQf—9?>+2{ qJ=[y \z?L KkZ1Qh+:$m|9e3UfXx H3E% Q$t u@cIsWoc6T1ZqhZ5'CEG5af=>]8WaqŘ$⪥8]}mM͏1zg{>r>v7Fkz1t̶L擓 _[,`ȑu<ozE>ZrA͇~[^A(sfr!eTOAܵjLΠ5+T^OgJ{8_˾n l`#eO1E)$%߷!Pd`Ė8ʖJ{nebLQka{QqK|@ض,`Tfվ#Г0*6W) .n+)KLm$4Ps^csJ% hUVE|c%N)AFsN03idyd(D :oUHȱbZt1R'3&gGSuUk"J\ tm:. U{Sf 8b *^lFxNi0 g7nR?ßcOXU-g)ݰ'~w9)3+BUvnn')\* 8k+Y{wWJ5 q>'7sjE9Q]w_৥&xrR Q̮RRuhhhrYt{R5ކh=Ӯ0{ T3wقJK!ނ~  ,.l#>/1Hnǣ1Hfo*X֩澥dmQ%اC?*Җc0.Ds[SoN:ORc Wfq$댱пeIpMe]f. _o_?} (4hšcnh"x'9" y5f K I -11!N KLaE,2AɅuq0K_|>L>7 4llz0̪t}(coAe|&93x2f6 A?ƗYW|w9J2\_ڶXөy{z) GY(NM[L Zxlws?%$^(fjŔoVXJ鄲%[jвQf~6s&A~~8oi`>_2BM)aMl(6JeDW}uB= Û{rY13Jk3B-Ri 젰кiRlN+$XN ::MqRv1hʛw&) f3:oO$ [L?x f9JA~uy'4e0K-+e8O$kip\(ڞ5kƺVeh$8n$.@IW=x\*鮖6TJNEP^:2368WhN'~z.Gz@өLNe7f| }y5,YCgÄdX"ͤ|xi huw =mhxchN@Y{S$ ןD R_b<Ťc sbdJu W`Rt)4EauJW?~ H,~zJЖǃiz4g4-_Y9-?oLw_u7ggN!9S9\#:ξ9K=5)_I}ς'<0%paihOra)+ ʀ`۳\KXgE5SCbER)as-h@Zs!UӄYTx^l5%R8v6kY}ڶ]SMiyiyM0Mr>*[** [)E s %+=0Xoe("G-(P]Pss4se,n~̗ \aTNUcLy; û0fοLQLu7_ xbN\X\ѝYe*e^߇_fϏavb|`Ftji&]jVƍ\ A~)σbIOpTޖ/3*IV"`0O*2^d{gpl{ӿ'wG޳斈/;mM+ENkZpͤ,U'ycυr#[,)I p괝]-Ԇo(UE2£N{<30֝ \zLyӴt4\=>1J5J=*j{z]0z#5J%-@M>lɨ{SOl}Wb1u")Nf|Vϕ:fi:z\NFEA"`1*Mxj9jS%E:[,;ώjՅDDB̄fg9M/v;2[>n}l6a5_h%]-Q;WB`-P6x+Ej{k4n=( mC%/7[NG眸-nms#yZͫLTgZ^a^)sύ.CzLs9'\F/>4:M%F!Y YVZHi}Q~l,X(4vl_r,~傿WeDaWLjamTooo)EEsIpR'*^o7z{`TP2Va>.5ʘ/6է.W0!,&*:Y͋:ЗUS:,ZmZ<$BUd"?/"#V,Z\A!2ワ+H1lċ9Md3^ɮ7'``Σ_jcsRWg/eg_Ά~f]ʚ}Va6+D H|ęe0ȼ``6\Vk]ػ<#8_!QB93BzrI8{V GBȵGxuha ^ m$bQkB-'DN"ܪ {Cޠ7%|yBΕ <8<0ñq*AFp4]Ԕq4K B`,! 
i H- 5(xET3Pb٦aXIzӽfX{.`JyQi,mI a ʮTFͼT\X%@rbxLjČJ w5уV#RV=O= ,1S)/,8Qq^Wq\B(zyD0` " <"0gzr1Vbך9wW_/2̰qZ=ȝW(' Up[pr2}hAD>JM pH"#ꀙCmd;Rc+A [Q#,!޹Aԇ7N=0$>h쭐<҃ߎUh+)=^5R3,J X)X$!2̃AB%,!XJ`^@%=Ɗ,%~8NJ/ }7v7iYȷq/Slaw 42{:<5M0VH<|}sYaO]ݼY_~Dx&$o#0!t>SNߌ"ݲ߹1O.- V|vv_ " gz}6cwKRΨBXsҤԬ^VP锘s- HlA*  N Zc&q}-st w)| 3eߙ^ ӭ)g_o_ie:Gdj\NԿ{.<[a$]*| EwQ7{s NNL[,6XD>|D8ELBNB5Äpx=yB@v1IJaSf?LFɘ2_f=_y9PXr *oSw~ۙNŰ}DJ't~MkD'|ws{Q0 q8<љmLɫB~ӦcDUI$@_Cc4}^_#/(`iW€ * *")gKLu $߆vVjp3 R<͎RNF@m ǁ09ϜxJ~H˕݉w(+ mt<}v2=9ܑb*K%%Q}ʷѬ Yq X+tQ*XLLpoUWo@o_#5(@u.JbÇ ڃjj,Y= vX -Bxڡ@K)l67Z?(HTN"jXd-){3cR~褮~ +L@PMjxPo㊬0lcd+OaT\8 rp}+Op& dS賔J5fNqN $ p jB 6; aS-=~:I%wmKFW|5LD?uH!뇵.iJ}[Gc}"Y,.C zXJJdyN'JW$9nVLaY/qMˢbW#$aV >/ɗ S$0 J( %KTkR3.Q#mID?Gto{ pA=;9!Z~CVh4XOvgL!&PK`vt>;7em/_lɏ"[˵(%\ӓil+/T߅?i.>4]nUd>]t,3GHr\{˼h²byz|๸zٿ٭9ܲŮ 3B)x3JV{npȰ5HXڛVH\нF0 oΉgOk?p(D6|sgP*7Wn.[ܽ>5}XȪhvͫaUt'v3De.@)-8ҝN* rt9Ѝ4Ѥ,l$^pILJ"pbQ+z^Ͷ贇rGeN{T.8ux; t0]~yb/3z^VWN%0Bkorzdqu,6&Q̣5He6">P"Q9џfZl]ÆCW.s轼}:Yo7n!*0C[H{ںuζ,h<.'nByl).uK8vӢًf V/ @moCs${c'.6yTkB C[z BVdgmS3vo TZ@ xV`Cjx=l؜adr4K]!d~V:#̵?~ӕDa 7o ץ`ȏw:5!i8{N#_'*!YTbf@& E)q%y=4,o=#|dyqQ74dNh1 N8`=dJmW}6֋LǨw>jh\(-5"$&PrB$#s\ ichLew3=l1Bᕯ pQ ,K%( p9*O4H99 .K`sj>EK)Q]䅜I)`ؒkyeH `!y9yIɴ.i$Z) |J>$լ-$.Ky:en!˩PJ*%"e%3|71)ƴ zI†3Y 3zlEshu9/',Tw7{y73s{]-LYգeysqEO7\Ce'?O߽}>]0~6wD A8YlGdoǣr:?/'/wwӇc[ɕTP[t@<.\o a<#!«_#% cx%}ow_*Q&G ";:~liCU|>@g11GNw<6\tndUTUeXdfx$`K76XM=XZ~I "ƙiC$+ r+&1^Mkeܝ!fY 7aSѧR{4nsM>~4)̖n&ꞣ[]hZ4_m8tٝ[Cew.߰cxb!q@$F2OI+Zf)f`29Ke" nB+)U JWoem%߼E$2ɟ>EVm8'4W-{!jOa2i.C5k!]6*=X7ᰖ)Fs +F갓.'MjuGXmbV_Vո`L( t%-r q ]_$8r¤DQ_i1M@ 5AhA+!f23?0v#vbAs]5'POw\^ߪq|KpS'$=DZ"{$q7hyq쯗*{ ]g1=rw eUH wS}cٚl6{ȑ s>X{[~˔[ztqk[~lZk?rVkju~KESlϯ/TMѽxCLp١ZPKkiܶ1Z?\)ў>>)mF }B/FAW~~7OqoQMc"U@ tIEsn:|F܋gZNҘktL Hiك!bND\CFMo\Xؽ2[mNwj.NE ?*BՋn^F0x!_/#Al_2lv K}y}9E\>m= x%;š;TK@vۋ۟RpX6UCof$]RU͙sK(LF>Sp8xON4Y(ڮ'Y8ڮmdNݎ^FxL +j^'lr7&[Ol&{>x{ͭ ;wѢv^aY:dJ`l^g:-̪9+ޗ⯾n]<%DH^H[s's2v\Eu+A\fx| gbf^߼ Z%Y_U7Lϳ4#sNOMT!PFS($c 3ٖzr7wgdbFKrJ5i2SXS8E^*P) ˌgЮ1:i)A4Aj=/޻g0.*4H$,a$yf,%06H q)2#fԐg2YsB\B+\d &i:ߦ }eb&??dgsIi{\þ[5aS<4AfCfb昕[@\7uO; WM(1M%sNSeB!@aT*(HH6 C-^l-67Z>}z=p6dP׾tUm\@:!T&%5.:Ѹ.OK V{[dAU:uF}ZiTy.i9e,eHK%Dщ݅W$}d]|&?Z_42}(9t" _AM.N"Z]q,\`iT|gi)n8/EU G\-fELU:L\YlM7?gӪL䳽6jf{ qa|r}VD*_&Vp-}>yRX>UYۛradܯCɜ|2mpt}B \\ uBg+eff| @Ne9vscMNBQicLީ sR@l$IJy~ʨs"b*>|XSIq,"TZ%rF)?._ɅpsRܟg!\)eD*x'ϳL;z*t3uېZ-PҹWe[DukiLu,_r&XY?`^T繖z:FsGzWǀWphbڼWNj흦`C \,$ *;or䀟)=7j&>O!ǿY$ȪZ! 
dN{ͺvVձR EvY)}~ n¶w僚OzVN$Hrf9{P5',*م9z#:Tǃ£"4yQw8gwy5~o'Ӱ툯`iGۃ(4En,q$ ˎM71]-msr*sˏw$(``yzM>2 b֞P4>kcBŇre o>\+\qb1=d3NSFfGЛ9:_ =vo>|m h Wn5Up!6?w؋:ͫ B_͓Oz]^;;{o(Aq13\~mX.v.L.6)s)<jrxa.14&$)P2 `@< ApKI lgM!DʓOOzgݛTG;zo]_^·yww.n|sweEG7~l7cl3[!q 9#2>=BeNރ=I=aNYCLe`Έ+5')9$(C_s䍩hnE}{m>~rwcG}o VLpl4yt9s.Վm.7yвt<nx7\=u-Pcu뇿..o!7Wܽm%{ː' G@%wcx;H+NyOĦ'xoN`B $'09 ~Yi !vð('iח77_+R }ϯz@YYBbZl-%Z?m8 E$irz 2'2ڙ)hLgs;nҔ4hn!iKi1dɦ)mvx>)qF6b0!N-p\Q5(%kU//hď qp,_Srv nO,@8|9>J4e1@pYe+pF8JNMo߾bat.=~e: 9^4w5e<lv]n#ס.1\ `-]nx';r ~1Xg p;t ۘn7x9ܘBl&S ae(3hi0&foj){,PeTf:hh ڛ`[`;X;`.`s{H'`3qM0 O, ǥf @?4 ?P\ޒ)%0E87c0ci!YKeRU@IԱ- MVYoj){k &8yq@Z2#s@Q+17S[ǧyL%0̓Ffa3 օ:OV;P^n|%HU&zγ5Ntj^TZ(dT'2Tz2xɎZdtge5*Y8xmAuv?>,/ʧm3e{r6gw TRZ4E\w[SPU+W&KM ΠqԻ\HZ%+$ɝu+W_]D?F],Uv‘{fk-ӎw>FYCr.6bH= zyG_'4*B-v-.%J<#mЃ)8|3$v.k2TU*})[I`k^Ɨfl~&5ָ5.=4N1y"q[q|.7yW sCTL.ZLk_bn@䩘.URW*.HDlw)skoNcTc;<`s<7\,zlaYy(00 5 *p2_)='f+DK'5̉W Jyi%e٬)Sܳp?)^*}z*yJъSn/J`Rj!QM+ߵM.-aD;FR{)K4xUVND2Nx_ Y<ɘU7NQN #gTØP\XV92 PY+.5۪ThMe;m*>Sq8d;mٶrR>9ePʙݕk }X"韝`̬)fۧN?EbO^|s`ٶl!X22-L1t$]VlaSgLcY*"`G2& avD |K,T6l $%t.SjJJQB%*cH5mID5IҘe 0NDىsXw$S6T@YF/6|WffE904>kAvrNqd#;`E2 \e_Ίy}dVpeԌ\NETZ"ɩHD˩<{|\'Bry'!YP!\IOw)Hz׷S6ǜXGW{68@?OϷ-*7,Af{)\*g2Vs1SX_ IQdd4bThhTeVȃABhS*D~w$#ݍ܈-׊ q^E B=m3U U ҄#_OA>,<>՚:E$&Z .\lUnynEkY+qwÏ6EYU{OE^ls`UٜX^FIɭɁ;fw`MêM7D0SQAҾGpfҾ'Lմ[H9gDJ֤aT2{khc>"E.]; k TUXU3UG9BUMrЕ8m%|ݑoPAD+yf҈ ~>i S j뜌v"G\+p_|g^f>Bv9}{<[Lhٲʉ0v pA} Fp6,LCUv^;h qgS|P$Mmm?]᠍|_8M&.~S,27߮>ش=?jH< ۵#hWvj@58H\N'LhoM\Kނ{dGAd:_` =vo>|m(L<~'ms}ڎӼ zO#ĉ'=zJD&./ޝ޽79$%0\pC\L `.ȅPA0s\u B:^0xAr>ј<mz=+_ݻTuK*՗\aXwW?#/{s/ /`YG9pĿ2¦{Oktɵ_"?*3S57{#CނonNӻ뫣Tֳ77b0ХMb pʎF7c7| ycG}o ;rc?O%ab8%hsϱGO2|hЬ%ZC{8}'wVWC=N7w<nx>jMkc( <^ET{J ɹ7 $IήGn"\AB¸w\-{+gŀB!W>Ia؍A \_A|IhUO2,).´IJ"͗8 ͠/x"#i11+ \Nիt7;aج߬@ϊػ&7n$WzlN@zؚ]9|сSL$[7AųQ䚶,Y"*e" $yuvzԅ~ߛgwo[1)!An9_&uR^y(>oCPJ|yTM %g++h)rJ9HRBePzOyx-2 )'x+pUp+9xE2~LZ{V91+t%sA<W)S'E W!L%OX3 O,<o4$\m++b8an|_0|\Zf _'9қ kiE܂}rosSB`(٘"w 1"-p&<qG K aռ}XL5-8fK2 &EZbYQ)IG|r5Ey'A;xr1ŗ4bˍ.nJrMí y4d8-UiEOK[p-nC?#Sw{N핶gJnìYǗ@p5t&Sz7 {C?u #s.#?qh%Kbó|@..뚅X.VFn~zp^n&c;tQIMstf2Q@jvϣ=p<1ھ!2Oz2&\I bAxt^݃39vm sT UGu3ƭ?O`j[z>} Q AY(ζ 8@(!qJWkZ.ʗk! 
f:U;TyDF><[*ʀywT=-}*wo8 _LA|sgQ窅>~mSHLJJxë$HL:0L<5C >#/?e=dkQ"hmn=ޣ[?e΂W1X rnf:%?X1S2l"^Gd*xvA)+kQd)S%4TVs'H6g-|~1ZYǓ ]Z*1ª8"xwT%Ҋ WrǮLkCK#\P%zpi)MWnE҈gHa?n7¼|a0a{8:a tb\T0:{h@u1PŞn,dw.ܦOdtՀJt]uIEؽt]gx!K%pa,:,Q }?_B&׮Kj[=6@C<_neS)n:Vj_\!ʤ nF=^A3<ǩG@HV$>At 6YB{фoWӦ`Gw]Jٓ[ݧ4zwΎ`K3ci6|G}&qNqro_prv:P;}>֚qF$$Ey!p|QI9q꼦(1ꍢzD e_ i݅wsTʰbko8aVI68J|o?kLgIʭVH)YJ'UDxHD=inEdDavjit-sZ]߅8.ʸm"Vgq>P825:gl⣯nѿF;ߵ\/Z4eo.-hCJs4~soŴnUtN^:eD4(x>@ża!%%h%1!QduL N9Z.iL"*F.&Z' A:9' E8f qJ |t1Up̲a.Yŝ2gQZџ3^x8g7=M߁ {͟(Qꂃ᠈ЯF7vy%Ŷ'E85re`TvWxGs'Ӵly|_qv>+-?o82@ ⚄*u[<2 Z bԧ]h2Jf #3FnjXM.aZlY_MKfZ+9o'U>TRG 7}uw# swF@C)y|IM/ׯ~]|asf߉xQϾ PЇğDnQgg; ImWϻK܀Bnюm6v؄<-jn#{R/y{"uqB~|z?K+"}|nMhC'N;(XVd~l4z: WlMƳ.rj(:i ><˽r2sivq h%]!0 hFT(A{\{x!$K֬ :M)XC(D2$%lˏs6q/)b~ &eñpOeW'?ջZ;Mg?UߏU~}VmXݷauWw-fU{l;r0ib} *ludcV_Oo K3:PdkݬKx aWBk6+U֙'π Ar?Kab+ oko+1|N̜ϕFVc h<(_`o@%SƮ հCN8&BX-"x-3TL$M"9jYM-k/:T<]XK&f[ -IZb$&qBzw8'DQ& uXrbJ#c5/oc7yz#_&c;~-6=˗(Kkȕ_(]itkHyQH=Hr_q8hVs5-/>C+$k 8;*Ͼex I*[on&j#{)C{=-mʻ o)֪ o6˷jbD* .Hsm-&91`)aDS#ALR1iP9hr"Iko( mn5,rxt\,&Gר4QX<8z,Vfnc'ާsl?\W f)^9^_|ODJ;v^oo11[HbPkP咔* 1qrB$))Q id,B9b=R)/ڹйKpqAEq8HJ:GFQ qaFm5m^ @GH4`u gV)vTKXއQ-r{Xf[ 9pxgYWɞÇ2$}e&ӣ ?̞BfHƇ s YdTϵ1[;y~|Ė%|GFyp7GdMXwi,)u8 'Y_5A,A=s2U/qptLz2Q w~c<f$䍋h=}nu1諸ι*Ugڭ~BVvkBB޸֑)ҷj/eob%SW90 =H(YVp[|3`O/}✗й軛dzѩDX6o1"ovT]!&wY hm,<,V&T@dxso}Y<\B7߃JO8[VgdE]ih3+~嬀V,ۏ:D.n-׹ljZ+2/͜6h.#7}~:ق#I+UmՑtk0 Gթ%0@0ob$az $ )k)Mq|dģ +VZ@W(0d zaSbT?p1U ƍ^yq4O;L+8g]^s* q %XrzV c2(Mn8[NJ(*"XB=-`3UUϯVYa<\FeujǍ 0pF$M0Xa%:%4 r&j D sPQlPG<'և19|w#QhkFYr шDuy0gCj-=,CͺʥNQEe-eqA܀zC)*b>VׂP,5fZ=eñA3A612+(+%ޏnBe/2裺/z?ZbtRUajd9D}71/>_|Xz GɁҮ! )11IJ1 uv7 |,y=g;W+iNa#Lxq S6pVf-j+ILBKoQ8 Ta$lqµѽ Hu+ 6ӅG2DRՔ"B#I!W=߃*㷆<ܪiWh[uӇYfyklHO{{ljݮG@5BZQ{Ŏza{w/ dr;z 뢁iil  u?zgjVxF&gh t^>?MG%S(&F`͞&d_ţ;g`18Fa*e=a_(e;g1)h\!,ġ}KYC;[ ]P p|ϒ"?̦i o_ʊ~J~8ھ8&-ZNSo&ouԞYaF/jߌ9z߼c#(ř[k2+0Ch娸Zx]tN1I-3PQ)bJjV6+(T&n. CutȎGiҟ曨Nͩڳ39>䒴^!my ͊$HUjqR*[H6(DY@gL3\G c)UPNa1I46bMb(9! 
% |,Uu]鶯ьh8岡\Js8c*77{JDϷ>æxVtۦxobp\ka.&6>,!~ Lͧ4WPO [3lv#q֨  t1%ojs8BM/fV .G{Y-j E: @j*AYP-HUN$%7X]rH Ƈ}nC041b=6ԫqJ P88܅l 9 5*sL)$o4atCEA6ȅj#Q\$ (ōC#{WHQ*yǠ$}m( _H$_5b+ћ7|+A/)[Q?[JĈdV\Ѷ 3k59[?ZvT [8WE7hjIYۿ}SMP?0=!,{Boq,ߦ\ b9E/u/ߢS+Ծ@0F >XXo XsE0Lb6`n(v3C ąsfXnj`+NJ۽FՐD 9g 2%Cq qD(';MFo7c㬯AH- JAs쩼ʁ"eM*=.L\(ɨܭc+iX$ClLOE%,i4HQGo o{njH>N4݄=V)Vz'GPK2 W!qH H+ﰃS|5$jP/`l18_<Uױ԰l' % fL-Kװ"W #&R( +8cJ1IP)OxBðVN9;G@y&Iڃ4BzX orJk+%IGD"D=ʇeZCT[Xr2hf*l]B`jbѐ厃cG4#Aa*ȴMbC Iܙz؉\",}JT-_FW\1fR{xIFBMWp95crw.q VBz;e#!.O%Jr{q =T"ۓ'Z2^cLoѦBK(KkE{b.l\_lQvk' '[PYP\ZCqF(V;/Uͽxkx@JrKTc:˻3PObkwǥFks#쮿6Ftlt ,&8ڔ^R ,H3q5c*<pڷ:{KDNxh`˭C/nP!84dEvp>xbޛ;eQv<̰Jdr}e;ojgu]wm~ٳ@(b`p2srd// /celɑdd[-2-RSLf|﫮.VW/7OԸ 3qyKuaTɘB9%,"ѡN"4l,*F6$druysZ2*P!bSERRDiSY= % %blHSDܢK '7")DVSƫD)2eG()%rz\|NR|5>>EL'L_o"gWd:Hh-`6GKk&=@,/sBj#D('R8L4 ͎&%LDn avceaybk9VE'_X`"Pd[s@lk&C;9p9PYk.)n bn4l֢CMڱ6_!L԰r@'{ObQ]/go??uz AiZ~3ПWB }\"4f6mRAxfqϛRƲiD qz$v}cY[TxюeBSϵEX5hmަ|4:+NZsn幼g\DykZʽ ֏koc}Yx.u,11Hc #uj)"t(ɩKj򅼌 [Ec ,H%CjZ E +J`r~*L*$Z-R oM\HV#<[hK.j2Y|mu*W\O7..(\ }&YThr5vWr[qa*sRSnD+$'JN[r$gh%P~k{9lQc/XQpC-2X`~GbG8Sk1~ #W:fP k`NP{=)n}+Jl0xC|ŽLekjćb:N_#s4DXJ& UAZՃ ĈB{H65B59/pʫW&9ޔaâw]^c")1xajDzȼzEr{;T1֋}눎W^{N1߇y>-K&(XI[q)w5v<MsѼ2Ǚ'6RD$p/ϞJ L[r#4RBۜgy;gbraXVBw&goYx=ØЍ3q*ϙle&=2j</(n1THUtm 1j7GQ+ºeHێx0Fr2߻ǑqaRa*Q鏳Ah\]V 6Gu[I٫U`;T\P CK&tn)ܿun|Ƒl!bҴuhvE(tn )q*ㇿjx'3sI?wpu5NQ-^BP2 |"k2U,1Kn2HIK4eY&şu SVJk{2~Z|o|6Ç2??W<(xGy}S.e.ێ;RL\ ^6۞7@{opY x2cI[2M?OA֑,cL-i2V:fTΰ^+Ji Ru}ܣzG +Aڠr8IFX[}W`p$3JvVE13,B ,p3nctBSƸoRS;;nیYHdUR <3iNV]C>̬Pw @f}aEHE_+Sq"i:_թ|+ Ș"rKsk!shZ)Vhe*UB:,dyXx9%^)2D;] -`/lu4_ ĿڝTh6+ic}i߉JI7f!fD;|j,X#Qyx(=.Z&:"F^XkE ^k8E()b{g/wDQI_(9O$?5ϣ~ {\(=cͮ ꝑen + r_]fdsI~YLX'M|쭶B`BFHBQEYDŒa/ #]xYbm~0(S``1RQD9cjDZ9 z O. 7habmOӤ`ZkVx+ zUW*l0llW)v H}gzjwk,b'BS0iZbdq"e>fYL0mzJ Эko,2V*@w } IhL\,$2$K!)`J+!#,ȩJf䒐*`8z&~uu|%@j8s*-0+àǭRXky ;pO`TǴ =f_A{\lزkYu-%~%k8y&4a[v Tg{ Q,Q@XX$"_7 )Нu17g*RRǤuK&Oɻ.K/ǍRZ3{Vt.ܖ)fo[ bНg3UMcKαc 1C{¼ Q $kт_#Cx40hWR%8ФC։\r'\kZ@FR=qx8b)* KgތAq;!gMĈQ,&"*e2 jso-ж)xCZ D9?Ϫ"]ma@ß{ OT,?\{P3WqyKzF;Ķ X)eK$@[Ym[Q<]PxG;eG\WIm[Q֨.S%`=+=:'1j[,W I.ٶlWV_~ߔKGm;EcF%ƷjQ~ߚbmi1@KJCy0%k4Q2@Cgo~V@ݝB.I!m欬f^ 4f_JlFٸf"ƟӎX =0o$?.k^gpd[(k\Hg9tf#"LɅEU YL7\^mtm9T2\~V:id.3IXkC߁Npځ nG6h0kɭhqalևGdQԆ# =x7Gsrg2ɸy&8><,]Y](?nZ%OI:}~8J;QhҪ]E6eY`|FA!0 >{o=4%Ô7p מWg?==i7fW-oYPH8"[pBxZ'8n gCec޵~q4XµTWOy/As5~F1}+t^}ǝmn>y׿A &H36iBFch~@ͳ@/'ۉ4= }Wop03yYǴ/;'dzW\,pAU{_n@+ƽv.g3uХ9 _ {'zm ܠg4ŋ4e7M5h9z !ԫ} DlZt1F`0;"ՎGqӶ/=E AR \;pM0EO:0>?n͛@1/`|cI7~&Qi+`K#=>7Kxfz Wn=+0 6ٯi>\< Py}bF44䦜MO`ķӖ}۵~ݻc'z2;.|=_;eeўxfB*4w>m=yB,όU_y=7 gSe<knK8c;s;*L.gy 4s) 6 {7ҏ~~ޘLzMmՁcYvwX'eRBT'(U*X+RBA@ ц'ArfSH0V\rXƫVUel삅yd-SM%.*l-24!()B'4Epd1njVLPNN9$%#o ǘ%Aw4\@8і GP d%@yDL?IbEqlM4zH"%74tHz.Az UW[DFuۻ8/e6Ri8`KCҠK`PZ[ C[U9Tk'c)(y'i]{y6,.L4˜ 7Va/<r ZOϿ /O.HGdn AEhf/ 3_==HcFp,? ٟ=_ԲX]>6x#"׌ݍbL {:3:qC!~דH#U Yϓ)ab6j/4(0;: W/G6tbN(a7.?X'!-g!,bH vIhByW.e{Ȃ9[[]y)8C"zWMh*p?Qe8j٨4dJpH0Ao#A#c˄PcsH9I5Dڜo%$ 6m[~ʓ*1kl4EGv6uHܱHvR6m%YF11gIQ,#,Ơ]ho1 '>u`o4+?5!F1zLڊ{fh:K=2<*{l4߸ z;~i6(fX,>=O"hgoBШ {*>+ni GG{.1,0=b `"'W knfY!u [(A$zND̑Q!E3pS# DI 3q\,;x`r}^`+%gKP˅4Vx 1zZHZNUD$0~R8.ԑ% ~yp$aBID @K9%JlbV #AaX;`0cWR[=)@$"U.1*셾͡B/X)/lebO#Bk+RJ Э[iVC;pBܮsMaU &M)\{SD7Ahͽ)c0n(+M=Y,FW}u]fzXJHZv[BmR9W;Oڎxr {EAyă-{X~z_hg'$] wpeM>|_>4[\(}۲(;@ax}MLtELoHgoT 2"1_ͿkeB\>𘯩'(Y|ۛc,L'9 מ5=Upb§pv}Aïfw`Q/"q,*Tv4\>钂j[M7mB]5=y'R(];V4Uǘur+yƸ [@puׅn}&8=b0LMM\3|Bv\lYkv_ yx.K/ղxPXmΓԋ2wfWJr KL\;Y9Q(֩;;&lc͡ ?՝\uGJu l!P›t-j?No+/fāRtHUPH =okO49f{o{EO!Iѧ >Sg 'Z &!֚sΩU&hPMU? 
;xd']^|Z.5>[<*cX7*VN7׭S<,aћg edT8Jo=#mN5H &2gq˲TXOb R1o6C$>`>pF9Af!pǩ2w,b%Ao#dViAzP[4{y7 *1 g> @ |6Ӛ4hBGY/ :ŴzeWpBQ7ŸI)BPdU#RؾddHAhUxچKP*L43,e<B9p4(KUj}nhEdwws('=BsGK&8;9.!K̭s86U!Z k" 惢97DH#y ݂FAi-/t=PHH%V\ҲS#nw #.|q^y+B4i_AMikN n3U{=[4δL ª of!m.`A{HWNgGuB-ᪿmֈ*g6 ,@A&Y'@)"XYgM).EQ'|33ZK@K :n_%t=fϊ&Rѳp6j@NCs9B+#Nß]L-w1*{JK N$\Ș 'd TZ͖xʦ ٗXQd]2≢/g; (鞃vO 8sJ1ƖfUmݏλAj}8̪)JSl8 ʭ~>(sa0욯K4Y?7gb{p8+81А#_+[rE|*[r5g4784ٟ"bLH2酰 |3W{[?'w>%m u6܋e#yr%9l 5jf,?ҾjP[`̘+oְ/lUJኸB*=:\() VpkM3@ Gf: 8NtܚcSsZ12W IOKegaTqH ϚӇ/Gt7Rڋ;^LW_Ǟ^Wiݗ;G "[]B$-"QEyH&H&M0Q+7:/xY} 8 K2V8jdg.)\% Z9Ki@}Hz3DYTyŴzR$Jγ?ڠʼn ^ցOGi Z´V4 LLZ4˔S@8!Rk4hK%,Ģ,փh;S1c TQyƣP29tR;T 3ÙȂ,w!!V:Y lE|OOqo/p Dn8{{ٮH=955\P)nQj=KsKR[O@ 2 \wspՌbE*17?zD]1#W'f3(j=PkŘRzJ cx}V9n%D#$*PXf]L{ B+GpKnvL *06 =TJS!0wɭu_dr0{͗<*meF/n6=E+dܸi%2%Ftd6If3FLcѳFoOv bzö^JnV%\8X:`UXDސvXCN.yǨWbz47`{t!|[2 >a\ѸD?]]yŇϓ/Ke3?OolA!kRD-޺U;R MO T)_ON`]1 Z6K |=xIˎv͐tL vK*xtp dXEG@ CSh(?D2ˆ{8M}6qpѿZ#58A{C^1~m޴}~n: ۔zÄqar].={Z+K];FOLAF=x;$[A;R(IS^JԎ*p-*.V*l`Fj!OV^k Nq/p E4M)΃N)MuؼזA9ї[\Jˢz Nl`TZ,#!53T0kR2,KmD&׾`i#&L\ "+/YlP) i0DHqez[-Ն\YZKuXRR]1be8m@zp-9ʪT3ؕ6Kp]r@pVу|U+O)[R WKUв,LG0Ȥs[{g?/Ǯdj!7< Fnȡn77[AsVSOv>b݇_?)g[܄Ճ]B '"đh#9Q<\.T,;yOgttӬr2'q24dÙŻhԙް40\d[UM:?*f2IT/(˸OVv<;V qu[y8o#)"ɀ]B;B;BJ\lB8xy:FL.NhrΔn-՘LyӯvkWT/_hE"a΂((@%e ZZ&t:k&Q/@A%KپV b qĢX5\a2 ` '}Se 4i24ՙ.xյDhE ~ɹ֪])%Rѥ,\v05'zN %`"V\; l -2 0p#GAlcLSAZ^GǃK_kGUc,FsZZI[@0gF灧t(87g.1YGpz3Ilgwt4MFP,g1r[S8lw(OL0~dz=PȊsˆwx$*f3!~+gz% hL;uXnZ[)9S&xF@io޵v+hvkCB"zLqƘ꧿VKDa ԝ>ݙԝ3@-0FH.DZ#6!"8 sݞkLn#XOӇ'_3Y,9RS/fUUp?-)TM?ۛp7=Oa>]&#\W' Їp lj~#y <;eg<3/gpnΌRvp4uP]sPg+yKoųټ[eBa2$5!Ibyqa\dkUVo]| =G\h%&+՗dE&Bm[Іt o}xsFcK.GF{{6׏\o|TǦsEH?cL\ F:ݘy s09 |\~twcm/1 f]#RF]}?ލJ*%C%;b|eũ+TLyJדRp \3k0|7iWao2yjGv&W`sF(Ĩ8س/ ;059vX\VL-w&qY|d&ZbH #]DlEɒmNXtvfэZ%o2YEXm6k09׌Oweo;M=4 "׉hf L)`MVD-Fn;w=X1s:NCӓ#[+t;z~9k&zY򶶛%OԾkbVw "z{"U}l 7ai]a{~ʝCT򬍞U S_{Ovލd؄ "Թ{흑hh%n:iX8[wf-^7D k"& ά iI&|nNgQɽz,:GaA)ccո[+M8ݦrߙj:`y;@MȕV#%ǧvuYߊ[+'Ta|!C#A1"DYi5VȍQ!,njV/(6@9)~Y@œJqK.qmđ%n)j^&3fn gy|757)X 7K9:MWFa&LBL"E)%) O)i3]uw>cBK.#gFBNjCu3X Ѩ>ihxSE{kWƓʯ hLa:|wvR Bb#:hݎw ¼2a-Ss}}x VyO u/jożvo.V~0ƝW~++%4*>0as2ӫ&߆5UX Z$oW{b*lo;֝LnVj2v3$O(:LTm(VX! Yb1׭ gI`f繐*/]^Q%UDڅ6L!W? {d}Iel>c+"蘜2%cF%^&y1z䌰 Fyq&=A!Hpc cJK,ARi8 Bۈ Z3+U$9eH=9* 0(5jpݬfW׿V͉tRrf(9+yG {9N: {jd>l~R/c›ɕ7 V1ދFMB~U-— 8[-m (zIz LPSҞu46KO?-_iOˊͿ5_IύD*n7l77R| s`w5c1'w6cl0D.ZӘ BMmENX-3AMI`i]cԘMTOscdS 4'H68 [ t!:Q[іqE Jjv.И {ՓjoԘŜlKcrYwڔhg RC8$#Axt FA X YiT:Tbg,# If#B{ڼzSeqqSqF05tA5#d`\+F:?ϓ] OӇujBSl8:rhY Q.WTm綱$LBhHD&gi5Eg@E #PK熀c`ᤏhQ%W.(8, cgrG;#r%Tp@M3N&a~%ebR.qe1E lٯ2 w֒ھ 0v0tT PǎWmZ$ͼy+޺m唛AQu=E=}^.Smi^%NY2(5pycfz##LE.F /@P*_)n*m"D9$)TD]y6?5]xJ:N+O΂R27v Y(?x;8m$tV3\܌aDD>$$iJoJ@u.ּNRðc3'X! D éNGԷΚ^L^t0Zt ;f H: d1[S9}܏u/o){ p~JlL l`<8XC&:MD 2\[ a,DbRSH{ D ĻVmѫgbXT3oo^s5 K#YǑPK9]|V?* )!mEC>fB8ݞdn! 
>]~a飽)dKsOJKB9BüJdB '0r,RB{x1ċִpAkj}<=ukذ>0 MWgB> ?,![q FW{-tUuu؈8" HoFޓlnsiO^eP˝cWo]Rkn̕Tue߿8V9}KKM1_ |;dG {}؟8qTsBdR+1cgǨnn֧Y w=r b|*q^y}J`iv^q,/3'%Jj=TY:jUs]r/^t4Tk?RJeSɐQy B?__ v#Tӗ9t-JWɻ<*%CL ti="ݲA]&һԍiHȱ=ɼPZog#\"@S󟼙z{^Etyo[gh]_gmWw=?W;ϖ_sylA$#ot1wkՃ+p4JI{_peN]ntpEnaW)| 0.{TqT!*Ԩ4t,ɡĘ΍V_9>ǏSZ"6wx>~o~-Ąw]%&=` Y1 GzGbpzOD'њO72 2ۻz^AÒ2gSW'N*]hR>IwW1QuB\~>n{ŎqgTfRJB8=BZcfoǣ:OY tm.'HTd}z{g//\և9P?(m-jeeIu Bcq2W`a⃴^`,x7g79ٚޏI9ԥ~ r6Iw֥~~$qWY{ 9xxcS,+J rI:VNmܓWJR!+fs Ġm6+D!榯3-iŧԩ to=M?򃷔JƤ =)9Dl sj29BҝŸP&Ѧ^Bdͦu>wI0wS+2qۉ DCB%2s~Rh !evh%Q @jPhqKx.-@'>  [Oq iKAR$>DRM4 \:@y%Dvst"@cys6;&27W _{f4{.l%;/0sP;~iF^l*dQ 9U rd=)rƨ%U]A* r#Sl,dZEm"v\]f&gee1IVcԷC^-ddžMkoo(}|Ny( hAPi#,hEP Nff#CzFO R[%fD Hᥨ ^TuDmٌuU(B/cɤDhVA$"λ`&n+",uOC3pԼ8ʝ犐3KVVB-(RLTRYn`gJzCeP3X#:%Ni"0JƺIjk/H,ݕ ;v&>59h̆,ůagqq\{iSrS[Ke4*`.-vWϪepx/^sQ/L$\e=޷v~q۔Ԅ#<~qNg"E|w9E.}vo4\P>84wE{Tx!Qi)XOsD:nW7+\c2b1mӎPSE:ZQ%6HARŽ{_B; -E\ } +)kS!qS=3>+B=x)*=g3xn"Co/fzas}uYj40?z򣙾/W~ސg\ݻ .놧w Ҁƕ@뾕+@(cMGvZŕ5 ĘTIuU؜+E8'CI]wRK> 20"%c+ #ߝL>,ְ*N>lԎF3Z5B"?3idgXK$vt5Q0I_F2J>ARiNqpS;S3͔7d !qrffZS /x@%8 k/+H`Dp#4 &%Z2+%ˠjmJDn/|]_),o;[k|'~1*KJ"^jɹT1DEZqˠ507Ya"n:[}i1yŵ~q`d=1jaMAXƗ7.NVd|gI1vYu=r 7_&Op{( _Ed^NB6l,$+YnS%bL$+vw;Qbu$+r4RH4;Y>^_G_uB :i,rA7Y|;B޸#odvV1`#[D# w=ԳmMܭ\4%SG !2Ɔ#iM⳼zpn#fw:Nکt )>S("( 2@)*-ԥ.n2*Tx.L年4^h34fkR:Jr@qRR?iH=v..HoK˩hX<0D˝tIRsU_t]Ή}+81PN\:qf:eϏl29 OߞfF5(ty񂃊2Ƌ`yE$aH׍GX>~1YZ?xjT:J|n :W"EZK~=vH~Uij=!zZ1tЛ- ޥjuT 77 T~.<-2jR>΂ 4T~:}bz7tWī|Tg.]VqYDU;__|^&K*U-~Qz遾IW7~t1&ZeYFDAwUc+/)tnJ|Lk~1]'ٳJ}5U.˱:!_-Smۀuc[)9SM񢽨Sem݊4׺u1C)~NWD)gZw% i7o>^WqcXK6XĂ,wc̐ƞl5nj(Jʮ{\;bt5q 耭9PmhuĖK:b˺xC!ֹs{OwoWe/\>==`Z-G wvLQ_b\ACATu.o)>zfxz0!l)pwu.uA.okN +5r3×^ wWjR?ξfﯯ(oBbٵHJSM`:j-};PG nEYvNMEh cQD5GyxUˏZ(s]ٻ޸r#W,vGE>LrAOXkGARCbRXK<䬱GQR1/&Hi>"b%JrG">P:PM 1CB>LBw:7ѹvFnyǍF8g{6{hGqiԢpjaǩҠӤ~(YxF, ( qM5Z;Ι@M".mÂFV5OS&N"q#BKpw3Q4xᨈm2΂:%-h;y$en8' `*9 =AxB⇻y 뵖ATf"axT캂 (U@P.%e.86w8]'o./)`T_ ,o@~䃻>N?#nq} k{s6g+ E/;SrZ-.$n˾+y¦T;I!Jy67/P!J`BSTEc!6Υu!O) чhzy}/ʵ,͂d ,RPInjjVrm4bdqԐsiFE09?Fh$A;k-u@S_#"vJwHZF -z۪>ϸPh#8#c+KFfGrb}ӳܚ<cFN}G1Mc{pJv C$oHSNj[3\C4f4P(jyߛqݰwKJ}e~#}Ξ*y7Ac}Ä訍L>Ab 6evs0tVDN8e2cvW׌a_!򸌶ڴ̿s3僳i͌k?xN?xNzizo'<jP׫JG,8 <Pj[ZJ Qf$<Ͱߡ*zKZAK[J(xPVٮ( vƎ)f=M!ޤۈ3D8 [!%vr[SfԜSZ3j2 Kf}i}n5Wټj3|I#&(Fhb g6/GN RXp!I9大'U vRyLB,ѻ LJ2z2"G"$(l05SCwЁ^WϚkneM?77}F5;|J,o9(D4W. d7"> s[mߞm?Oxfâ߱w?31=~04ZկTbcw?L #Ǚ=Oi5sFțٟEhwPnl. 
0]>w(<86ϾةCCQ\LIM^!pKKN@PݍSMEtizڇHSڵ;)?-#[uY9{0uݔySuuz<]‹FߞlB'o0aP|?n7l~ wq2Qz2=`vM [I# y;YpmR;}|ȗHgD?"3s?AaE)$iʹܑ|"Jjө %ߒZQ@QDww4Q@+"i[Sf 4.bAwSb12$CM\ 8rq i&7^֌'CՊ0GG wE|Dlӣшmze1CC?:F!{"`ttolBpyw hkNaFG5j׻wHe0:iq ;7"kG9 L뮛}s?Mi0bw4A`T:wcsZ$ږ]}4:^v o['l^>yDyڲ㹗܀\ $v**$o afEv> 9;*:ݼuBVbPISt%&_[ \9p` rʎ p{NSIڃ:KqG΋{es m+).2& K|e\ǔG\K@)W YZ9צ0Z%p\P#%5ŇHzR չlgR\32i4>F8m@o,WGHd9zt=TOS莈W 4*tHH,")jO!Mբ%oX ^r2-JDqI@ksSY5 :fx!@C߯>N\.'SBwNHu_>FΈJF+, %[(Vj;V4׊-m\ƦH:fl?nIQ#)P )pg*KlTҢ5J^Is8wL+bp;~4jJ[ǗmFuXxwLlkf P'ģ_TĈ6FM skj0Y$c@6Ԃg3 ,]O?vqDGөmޏ^F FOI%u0)MrDҊ$zRxvewmLy̑65;8*wzqƑVV-d^F ;k\Mp%)x1gBPV!*C}Sl5e 6Q<ϫyf+$sRS)$=Q4J#]@H^Kf8%ws흫l\흺3Ja \",:\D l^A%fmp!>(NC$SR%E\ۗ/ըPꮺrI<Ryt)7IhlS yX42@:"tq˥a%NZTGXj,q$o(o05HAH͢4<kͭ@A{qΪ~-z$ܡxz7_,6/C?aO9+5͠\VnwK{@AE9xxG!׸5wz?bMg|?IK̽夕5^C{r5/ 4.v[` ϾMh$_3~rٹd3k\s=|/F'8`x #kHbDϤ5!'ѹv- IA=}߹^6h@Ӷ%Ԅ~RdGw.d$-*JV3z5o NϾ:"yҎNčKJG[d08K >G=\wYh( B'Q6A8EQ6!Bs9FxP{wdzO{GvO $pD` ݘ m`&s7NBAPԩ5L4L4p~Ý&LXr $"DGbhkR:.qD́+jtS NWߩ a~0-tj핓;2&1 l|DVY Frb"1YmE)}WyTIW;JJL4v֜$*jkg)FyF*PR*`~5LjcXA%.>wECx~[3:XQp30B@3'mDD ,Gx<'_>K) 7K+fDe=/ \/1ד,)xGywşZphcXY*zՔ_}gPڟ|uW/ol訕W:mKſo\Ptίr!P]6mB tӧRF' ܮXkӔM(Lbͧ5^Ho0tx{k3px4۫fv3ٛͫoұs ͻ3UU>:#_E: `Y`s txyFa ?,`Y r&ekWo*馲}ZN~ZORuJnt+|X/U,QVuW.wݹ0T|r"olj-Virrxq`m CYffTv+]` l.%+GHT ,A"%p܈oܔ4]%,T 鎲KcګXul!b)$YD6uO>^)-70."&G6eϟe"ssJC91f^F1,} q4Bi*454p$L: Vʪ34Yf>YnLh-^фbbӊ\KKh E~D<N>+C=ͶpJƅg>r|7NWj5sa'|*ĩݮ%5?ˇV'P)O*/:%OAy󍁺t}%ٺչQL*ֵ~te~4aRm{_WBxr2sQp2q2JЛϩ!7yk;ܩ?eng}}\Cβ$uFs۩V wye7=ީj"i!uVEW}#¬}!嘈 ׌`zt84UH$Z͖zt8MIgJ_#B"e7yn`/JӎxDxyjHE!TpyY)U~LRÜFA2b}א%'`DE69#8L:49DŬ/,tWS-lsvfcx۟(^^JANqYh&hFmJ@2 OcxF15.众_hEp.J U k~XG/y[EH$Ak\>Ěզ)3\)jޡ+EF~#¬Ӟ*ERlDk^#ɰ0;ACP"c50[x ƃ(5դЛNmj1'h2JC16hHc!88b*9H v JQ/$vg2@vBVX. TETV%E Zc c앍g*pv?'5"b >t PB5̧y8bA0%] WGd}+aSK?%LJѳKe V0iܟY7MR˸`s‘ӑI ˃[c爳*IT`}`eLRV'im*3 X?#A<#cr%}[|-P#=y+ p6EGa(_oӏoq4/6ax7Z1/+z0J L|kKwE<9J{uRBJVW·*OONi7X#'QQD۠aQ#tz3B2i^@,kF– 0 FzkSk R #q;!5SSBGҔ 7BLPvyxv96'sF1Fo۩\Mnc~[KPs 3a׌Q\'O䚕~R:'\ JA}gg AlAЂ}e'5.$KB끪鬎P8qDp|#]HS.:y;32~)P=.B6oŒf0Ky};G#AۤTuAt&;f*|z;/]I~Gw¬Qܨ*.h.z-AB.СXsgʻי8;.СDsʪPہ+"R݊lFiRWPe@\&sګ]MPa]01};(숔vBZ1?Njjs/+-KW>Eg7[_v \Xo,9z!\J?]h}!񫽧v))p_+B 6Jh:JԢHOU[l4?4ex7:R][8+F x.>4љyAA(1uY5E%eWY$2)YSe"<rD;ݴGQ{eEs!Yf?\+>hj(YXt"F{21׹JJsjuBWS,C`Ždmk\ԜB@Z=!WՁ(@' ׵;zYeHuH %DѝB)&@gY jۋ d+ :};0)ֿaϴ}1SvcmI~~;/w^&LUerӂJXip辙bP rZ@hΌ($hs (~./ }ѻɧ2iBܶͩgĖosBUݏ%藟,̄aA/ۄyE I`65%S!(pp7Yk#SHdQkvtKV_,]Wɓ9!h`GJ Y)Ts3lGHt*S$&x#¥SHt\kJ7RFd⎞ hVX X΢/*IB 4YZ挩"Rgab`0ʧpLU;Aդ-N߁YH,+9 XߚS8kl]Op2CH@0>F@DRg%1l)#x~ ٩hM E$KI_s&7€tzi4d"{$dы()l!񙇕BI UD[ .oY[7En+}ʯ#>VX+[ s/*'7h`is;F 9~ 8`EӲb׺4?=r܍CK+*iSs'FO8n%){y!MːQ1oպ00r~<"vHG6<:8 }BPa׳TƷ ˾hDror>6M-\菦;>s\ ?.?D;Ä%OQr5pyF'^29n9Öq\Az8s#\$i77Go3ńbqA51BQILeD"/`0.( 169p7 )."s=TN czbJ` $QF!/rLNEN ("HK**n,@{rIL9> $NѕEO5RxWt`׻"18 GZſ6)NrRF?凗véۮ"^>e9s|HG['bY5ܟo_?Lb̍RMAnSP'S,aQz3mRchOAB0֞JI$L78SwZ{% ([fO!xet๭7;C_uwx 3x5⧏ԉ |t-{M(Q)THS<173p 2tZrID^gpㅁnSEڙy&vSSAd#[4ؘb5:ĥZSM5!}O<>,g /ׇgVA$cDV 0"MPQ{xgp8OpB!Ā˞퀶z]{N4N q2>*FowW!5y^-Q{K&Yߨ=f/O.T~Iد芪Ĩ^5%z9AFS;^ZuOц6f|K{ A$p,w/jASVJ N ]R HFr$|3kwk\~WZ|~<.lLs?/on/n3uy%}v$A8[Jx(r٧xlqÅHXs^g XIu*,˴DcD+]skn]\$hN]lC5B=܀j!X42jcmEᭂZ)l!A7-(T(Dp rˉظ QPD(-nc}&O̥v̓} ww|'#r=Q+?Ay$WqllT$%_Ph)u\:S1hy꫖z]!` {Zp`9 ၼq`m0, iP8ʙ ;2es ;Y 9ݿdS]Vu}(Z-fsn)}|-v&w4"jy.(E9Q -g’ޣ{u'Gb2 ,>{8uLCf#Čd!?!g w1F^<ɞfng=|${g> Fܦ&ov>IړnI<\fUVpfӃ܍T'xti08ă3/ c$sf7b~O[=J~X9j=3t>_u8@0MQg`S+ivӉyz{py~v唪w[\:4) ߚ|E= s?sgy71|>YI8G=/mM݅S۶\XnĜjBciSn6cɔZ dA0B)0!(90DVEV)1HVt}N13mEfvoz֯3+*O9!|^NkfGY~bAO1ϯ__Jhsen>EˏPc3W$l5ME>Ϝj9l?q0;[eVty(p[XSy1Xo|[Phʜ!ĈWôO ,`J[Dd0OՂH.AT4ZPqw%cNw=2uw?(ؿ!AD^cbsͿocγ@!cnJY1cK:ҿ$z! 
I~.xM'04z *N.Ԯ)0-.&V%6A (iI rW2HKPF]iN>.- RyP<0)D31HlOI9ӊ/iEcd-x ix䜁=N PlɇC{ Ė8 Z !ӡJu,w5q zxbl8`Ӄ܍DU 4TȞ*wa@EI$-m-.lIkqn_ݜҺvsW3c܍ueq0T]{/J4BRF8tX&!3c%* ! Fj 7ZJAq\ _gEIPxrQJZd% ~zXCZ*VN,ov\0lW|Rwo:;ϟr9Ha$%)/4:4+FШƃ!.bn$U0% |m*,#z8b z'ע#:-}){Ze.>goSDg%}Xˣts+%~ \ H1cGʄ4#Lr+3 @+hir<tc ʰy؅<_M7I(iyEc36r8qA~/ WD.H-,rAt]2!c+--R^:%-$Jұ~Y\rrÝN;eþBd*$B_rBB99.,֘ipqJf9%3v1YYI ́ 9TA! *'GrXqHȽ$2 3M (d}ضj0asmP[HƛѭNy N!^864JnC/6(xr{~S3Nk$CTHpݛْ}Xu$o;wNd}Ys#cgxtX0iP#z*e)H W`zy tSX@18-e\A 44 B .)rrAcn6W`(D$lB=xVq9nF@z# *eE]XE KAų4^Vr_};i>"|j^  N*%Xr):vP_7 # ATbT# >r򢣷 O f 1eaOw; 9G:n>ܚjiHFCέ m ZkrPR[t`\nd*4׾nRvCSntb#N@HSJA iwo9JM1I#nSϞpҚp2~r2Ԗ[|]葘"t6vU"HcIb&hMr>t 1]G& 7nA; @ګ1rp*wgf0J^wybCRCD!gR=ŷg8EVR\cX?=$jFR4 \Gk純ѥN-znc:xsĶϥNJgp|gT (4b95h7uDE؂UthA|Oz-~8?(q9_7q50$;.1#0x}ا00q8$1(?Bi8~SbUS+Tc>HQg2O棞>3+tBq:kt}`Lո~RC%s>@[݋r" $eM1a?.-3:{Xe<zSueC'Rp'8/k32x$K.P7'g7iwuWM :|I;9ynѮ|OwmUbKt3'v"nmx%l_S ]hmZbC?ۤO㯅]GζhEn(`S9> EQkK/P*3RD݌(eI y9/hf$.e~g{$$ *)@E'.-2i b(Abt52г_\Ct݌RiE'?_b'S/|Iˇ U4R106Ix2TObux/^Njŋhp@3ot `38 (,ؠ\\(MSbSe+K5X6zķfp5~b''U7u l+cٛ| I &zx嘃fR쯡duʄA /TP([:JP(!ޱN|qO9M4'3`@cy)-ՏMkO\C3jHى^h_5z/joIi &7jc$I-ddkP pūٖrExXd5TaUωi)tN*.cv4uTh?\b@(SPAݏor ʡ]MRZ~t6׏.)7lSRNh\IfR ԣ$m4dSg\RsiZpgi*%tԷlYˀJˈUSI,ḦB[k )B3oB`I3\*-?b1WZO:aDRp,ST2X'Ւ =je3L MAI+1pMIσ7OӆϱLjSmQmHQp]JpyY `RKM#'Tee􍨂cRDЩJShMp2Ղ &˨ NFXuصdڪ³Ӷ#8{h1v~v*@iEo Jk=OXroXBh24udHu*c ɕEC4_ <ZD4 Z%)zI,Qr˔S.Y B_;ppqlFBMDl#hzjDd䭧j+` =3_u<~ד1`!Y4 ̞):tWSP;5-{~q>Q?0 8=>CLMoKz ˇ^6'3I4r[;|7Ɵ C|>ej۽۫P"D@,SA8nQê@E B:=;pƶ;CjĹJ8WF2Z:oi_F7=QN72bY{;tَ^ W_ {׫~LW#"<({͌vrz3A BpsS/ a_M RE9!!T9sJ nuCh'emв†]tJd)4q(˂LER-hq )]O&"Ue6kmvdh(k5$@g[f 7\uBhm"e7јv2ekP W 4i!_BfK<4 L[_檁!Wa'P-_H k{|cG";8qؚAE&D jWU(B TqҌj<@Ie)*7 yYḦ́RUVdz=ˤb8V'e0RTRx5ʀA  ԣ%՗IV a]3 j t >dAe>1i`hja35ŠI!N55(: J}Rhn,oڹ ( ^ǣl[ 8E<5<1m==ȵ-F;e4:`>F*Ḙ4^JJ_.qArwUc.+ \S(l}瘵gJCSBdkWUDU"N 0F x]fV!&d8 ܽe!H^9܌ un!`amA|\ӕ۸},YčAmqS/%87^H1+gm| VCAndN)gZ-m:R\348_sn -y)I׶>EFrmEWBbtǿ5TMۀfg|dDLJq̐m@^e%7ޢ PPMp]C7w_/4WO0N>)gr>ó?E\~Wz{wq0.nO0mn}P)G9V~ ix{Kg EX?w1Cwbīw்1&Û53(4cJy|8 Gxr4zi2zU# cxLpa62ìs7ZX:h,S<N"7k@2 #5uN`28\M߫ԥF+KyHu!14VRLJX޻XVeRm,DmcnO[])wu =L+¿sq^G{{8M/imw.)fm{*B.MW}_j:BUnʼlq攘÷<|?X= KUÎSgys%Z> B "RKA"A}Ī>=G%}ĀT{y PRG"UMbTUiR2"g({ ܉ s[+R0!RJיW{F'Y4,N1Rq|40EO ؁P@B0`'gr k˂YZkDR2 y/"=]~~e&4UE\!d򹹊RDST+V`Gie^qf2Ͳ}r0ׇ憐̨NnceO֢5pjtI-}fu]|TF 8Bʷ XϹ C| ۣSbmIPg.eJsvBːBu]ji@ʤx[-VIQR+ y܀x 3Nq\'jgˋDu7Kޙ\Dlܼe5=A\R8tΉnL"S7m߅'I7h CH# :4I10GRlͼ) ]yidqdͼ§V4 \:#5M@X' ?{ܸc6ޏV]6L&aУ-$y˔,ZR)g,Yuht xF,%Hs i-T D+ cE2#jځKTY9FWe+K1RaBap,8`S 0˭Ln;R4Bx8g1W׿ٍP}d8;SbL[4JE9ć5i~' Cғ b yy<Sq>-="8$x6_(7C뤺jAmmf_aϮo%}̛?)e2YuavR}8j-B^OLyC,vq,4ţdQe\2ͣZnO.Y9j&_Oɞ!o xǦm3_ ؊}XOXQ}9,Yl-'Cg_B-TCyFob}>9lh3e }%>|'fƋS~:-2֋Nu_JZ.!iWJRꕦHYWЬX[͙6RbkT%ܒad)RG[i?GSMfMg:AmA&Q|ex"!X܏251 dBp| Squl%I%ae>-gKYZAN鵭w{+ [NwѕprИ26Vb\|exHKUG+P#P $zqO\*uqSg-[rbyDt\-8 rHs.K_Q_,‡NO,97bL͹T-Ŝ,bQ.Or~h7# G>1HLj)cHk}xOaxJDΞ=kЬ]'xLxSDznQ^Mu }oE;˫Wfx\Muq$'XIkRޭ <DzCYa  Sɴ)$ ͞qgK|=3{N11f(Hc5L ;Z+Ğ{!l5 R"LKG( Ƅw<ԩ<\ UQrd+5.AI~t*6&evR(_Cn5%IVM!A(hI%v2 ø:+g^߼1R̙P]J*a`vn;yt)/0~L*lQmJvn%OQ~:&C<_ޘZbmr?ug4enbd? 
0eC'7jCA1-*{&>Wނ0 x$Q=DO arKRE h,v@fʝٶ lEAt(-?R\wA@{@4pSTR9GAS,6Xp fHhKuE+T(o-7NӒWNvS[r2#"y HS}lM(idI5VY*gǣEE,g43x 9_(cQ.\T  i9(1dAK84\ OOŞ4THVwݔ>iB% F>6];3陣QI=e ZhzOНr蚋ӕIK;ST<ƃlDAcF46[~ώv< " N'ŧOqv6\ ~: ˜,1UiQ:Z< VR, vmC`U*G 1rxA(x i!;pLb+rCY"vR;=Qp7gЛO;f6j '>PR|{tOkT",Al8’ Yg*=-N8{FX#CzɊ!)T0K"oAyY8 £2dZLC5[PFd 6VϽR piJ=}<is6.:daZ+Gb hcZkq\ӌdɠA~j2+'rGB9\@dƗ_c&1txRD#Oړ lM 5A闓v&&q5 $cXeQK;5LmNj~Xe^CTE-=1W:T҄'[6HJ#QcY OkE, b䝣 a&=iUbTlhM3 i@DTx>WiA%;UT0?wv3Y\C=dak|m;ӯ߾뛌y5bHGؿZ!ʎ  ktl8܎O,l /]lƇ[D]fu5чK7U/e*ޒ˛ [ET)i$9p1lj9!1cJdsՇ{sa2L̤$̭ZU\>wXB1P]`Bab ZbGe5^ͣyp}]9qW"Q7$(̓Ep{yoU:NwizdMӜx Rs"uJsc I%(-F@Q>yXC]u]j @p6/+JxV^M:@W9~cH~ÂxA2 V!hz|K_ 8U&WgJ-Ybk6h3Q+$UcuCNSZުGAK-NPݵ7vzi:pS*wfg3G_^b쳄KQǕ%j1wH]nQ/wO|×^ J ao8kLNO arL踹&FƸt .S:r;}w>.n WZ чW,?r&E/泞}qSzmAXn/ ]A׮"{ |f.WwL%M+4{=hM9ҫ 9Gn. aYwiӊ%(/\ecnET~9ʼ?E?!@ﲇĸw}T.}4o?(Ҙ5Hت3W]6nLH5,mFgt&C'*4*48;k;joI`N8>1O I=RVL{Än͞/.u!}r3|g< ~1fNLP 6em$}|NSp`x.+p]킵ШRS[fH֖ fMfi#2֪6 47N9h}Fa Mʦ=A6|UpCh 3lbKzmmb1G _4ua!߸VTXTF( Q,VD_:"+pqr5oOW˖@ɣ1,ߦ?O!>]nyF.j@Xfa>)L~ȿA(WJv19ֻ+I42 G1E(F0vS)l`r-rCY KsHqӈ@⏤ WΌCPД&&UG+2>BZʛQ~߳=n+v$_]PMCn Gɰϡ^Qx7*􈉔ݔh t GbP2b4ݠQA|_ZMMrF"xڻ,ͨ6 J00Xz7#rжF96EItӼ5*S{b ńK)pa:scYVonha mȹ$1'e@){£=Iy>ڛ/m hv\iURi )4KOd18B-Uck|L4c-Z7yl-ZSLW{履?/O&/_QuI./5."_طq"?k~'qXa7׸<>?[|GF{=j n]48`fOC g_ q8VXQ!V/.6mTQs Eqyfc.ѳYӏ1BeqJcR2 7 fbU Q9hd6AS]| 5N5*zE!zH2rF.扑"cw$ O':ٓZK͠YСDjtE3C%;!6Ǔ_m9#nG釈-#dx ͅo2@j߬Hmo>bZ JJxѦ-O" va!߸mSR)pOtɡO/hZʖ KɡbtDJ`03s8EķFD3;EzHuKF[0crchᝓtU'8# IIoHP;,ݭȖ{Ь͚x ;al]{^Tr3d㝪z6vEB%AOddo^D^@w&wL̳tnxW \˧iB^EA@h!!%`WLY ),K#1MpI+YjL'Ka֑l z^,SW:IcSOsmTգؐ+id{8R˅Q턴3ӡVv?eS맨j@E6c.v)W> `RAgN7Q'r ȮЗAabߊRˡƼ!Op1"Z[(vFT剆M~(_Ea:LK1"n*e䌫W)6h}Kt 3j|X/0n1$W-E f4}N%n)A* PJz0~,ߕg8?O/6J%8=9t gJcIgj$"Jޘha0TauFO\M$g|wٺ YmcNBy(ϦgK'&Ձci:`]\d;/=8ݞ`9ДksT)ΨN!d/}H8:Njye ݟ9[x^>ږzhz.FnA<)F2zD}}i *UJ=xtTIgJH{PJv)jBJZD%x| 8))-W>&)TQIta{Ƙ 50ɺ;R! ÿ*ӄBn1ڲc1h՚P[4Jh %"l RDڮ [F,,p?u>쿮gGWUѻ;ޠ5*9ǔe0ca$9 c1hXhʣ}ٛ1unWcjH|"2u^Kƽ>qk8.>% sU<._N ]{cCy~5s-&Jpآ`0yQT8Nd]܂*Ot;(RsMe`8X'tE:&L9tq)[8N~67BfÐjXP0#CHm:򐫢eq.j39{=92wmH_eq',L8ed'kɹ._Q֣ev_ &jU,Vx&gR[. ?mJFT;jt]i׃d A@)b>Ԯ*1 zHoIP q>P%WsBXsEo3Чd;2g8πr܍ V4ey.U! F![G($IE. 4YL! rNshqrGRsL:Eb5&kإ p0IF1`D ǽL*ș"Ea1%U J6U3s!i IAVIu,3sE 4Ya4K&hʽ1%h1#(ϥp$a#NOx їr%5*}"/ZJ1A* 8IRo?YgHރ 9KA/w]i9iIm_ZD$P"ʤVNdj \Wc1x-oZ[A_wHKi I})vӯͬM|8?b8[ćD]1Ƙ!gНAuK~u`.w>Wf?Ek]1NWn4۩F=\Nu;f?O.ȸJnm[ۡӚիd罳i8$2=xonLFx_=;JZEM=wŪWz^ 54~;{B^ .A4W:ED[z%8ť3@Вs1mr;hzaDqp麚wg`vFL @ (&qUs$Hz@e$H=JSakzF\ (ʇ" , (L yf &ty88\B'ԧAK}g<>/~Cu)ͩDSSFhP Tkc*bN]})rMizOڶ[WnU Yd7;-eB&`#D2 c~/72%gҏYPMZS{I s9d3/?ŠR#iȊdEXvNkchؖaKƨȁƶJgNۂWj@3ZRƈ akz_uL+3ogL&?9BvrV'i^<w7Z%-Ф?F L ~wH+cQp B-ZչˊhZx/q>Q \-xӘ8=^5^A`^Gđ Nr{s5VՊlO]X9Inqkֵd&:ŵE8Å,<n3M FtRǨ݆<#Tk^ 1R!!/\D˔ᔒlMֿ^oQgϦ5EZ`}Fk V17h~@V.د@pҙR:Ň U+gL4h_vVMWWmvhպ8Im;/@OH%e00*5cR3JmUNJz od7aАs9][q8E'SuWuWuWuWe֭߀3(2әgsF ղroe+2eYX&2O~ e|{;y»Rz;Y_pa}]EY&)9*,n?,1Kb:DʌDK]w,%bKKU([ yWfM2z=^;P~puL]=]oC bUIzkCˁyOD{"^Pka_ZHSԂ*c-F-[Þ_ULm cr.;m3Q6+ (nLI#S-O>>XyDyk3G){lXiD7fX=;Vcx2Pqy㚛mƽ5鯞x3I=yM=,4H֛j,\ -]M]N0:`f>D5^ag((u%ύ. 
var/home/core/zuul-output/logs/kubelet.log
Jan 26 10:54:59 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 10:55:00 crc restorecon[4614]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 
crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 
crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 10:55:00 crc restorecon[4614]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 10:55:00 crc 
restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc 
restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 10:55:00 crc restorecon[4614]: 
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 10:55:00 crc restorecon[4614]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 26 10:55:01 crc kubenswrapper[4619]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.106425    4619 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.110997    4619 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111025    4619 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111034    4619 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111041    4619 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111047    4619 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111055    4619 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111063    4619 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111077    4619 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111083    4619 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111089    4619 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111094    4619 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111100    4619 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111105    4619 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111111    4619 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111116    4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111121    4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111127    4619 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111132    4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111138    4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111143    4619 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111149    4619 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111155    4619 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111162    4619 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111167    4619 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111173    4619 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111178    4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111183    4619 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111189    4619 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111194    4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111199    4619 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111204    4619 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111210    4619 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111215    4619 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111220    4619 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111225    4619 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111231    4619 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111239    4619 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111245    4619 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111250    4619 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111257    4619 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111263    4619 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111269    4619 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111275    4619 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111281    4619 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111287    4619 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111293    4619 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111298    4619 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111303    4619 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111308    4619 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111314    4619 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111319    4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111324    4619 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111329    4619 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111334    4619 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111339    4619 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111344    4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111349    4619 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111354    4619 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111359    4619 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111365    4619 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111370    4619 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111376    4619 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111382    4619 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111387    4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111393    4619 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111398    4619 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111403    4619 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111409    4619 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111415    4619 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111420    4619 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.111426    4619 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111778    4619 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111797    4619 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111808    4619 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111816    4619 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111825    4619 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111831    4619 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111840    4619 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111847    4619 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111854    4619 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111860    4619 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111867    4619 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111873    4619 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111880    4619 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111886    4619 flags.go:64] FLAG: --cgroup-root=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111891    4619 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111898    4619 flags.go:64] FLAG: --client-ca-file=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111904    4619 flags.go:64] FLAG: --cloud-config=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111910    4619 flags.go:64] FLAG: --cloud-provider=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111916    4619 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111929    4619 flags.go:64] FLAG: --cluster-domain=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111936    4619 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111943    4619 flags.go:64] FLAG: --config-dir=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111949    4619 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111957    4619 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111965    4619 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111972    4619 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111978    4619 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111984    4619 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111991    4619 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.111997    4619 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112003    4619 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112009    4619 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112015    4619 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112023    4619 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112029    4619 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112035    4619 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112041    4619 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112054    4619 flags.go:64] FLAG: --enable-server="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112060    4619 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112069    4619 flags.go:64] FLAG: --event-burst="100"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112076    4619 flags.go:64] FLAG: --event-qps="50"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112082    4619 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112088    4619 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112094    4619 flags.go:64] FLAG: --eviction-hard=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112102    4619 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112108    4619 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112114    4619 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112120    4619 flags.go:64] FLAG: --eviction-soft=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112126    4619 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112132    4619 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112138    4619 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112146    4619 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112152    4619 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112158    4619 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112163    4619 flags.go:64] FLAG: --feature-gates=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112172    4619 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112179    4619 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112186    4619 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112191    4619 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112198    4619 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112204    4619 flags.go:64] FLAG: --help="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112210    4619 flags.go:64] FLAG: --hostname-override=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112216    4619 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112222    4619 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112228    4619 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112234    4619 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112240    4619 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112246    4619 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112252    4619 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112258    4619 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112264    4619 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112270    4619 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112277    4619 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112283    4619 flags.go:64] FLAG: --kube-reserved=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112289    4619 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112295    4619 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112301    4619 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112308    4619 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112313    4619 flags.go:64] FLAG: --lock-file=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112319    4619 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112326    4619 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112331    4619 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112341    4619 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112347    4619 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112353    4619 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112359    4619 flags.go:64] FLAG: --logging-format="text"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112366    4619 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112373    4619 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112380    4619 flags.go:64] FLAG: --manifest-url=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112386    4619 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112394    4619 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112401    4619 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112409    4619 flags.go:64] FLAG: --max-pods="110"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112415    4619 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112422    4619 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112428    4619 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112435    4619 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112442    4619 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112448    4619 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112454    4619 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112469    4619 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112475    4619 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112481    4619 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112488    4619 flags.go:64] FLAG: --pod-cidr=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112493    4619 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112504    4619 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112510    4619 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112516    4619 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112523    4619 flags.go:64] FLAG: --port="10250"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112530    4619 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112536    4619 flags.go:64] FLAG: --provider-id=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112543    4619 flags.go:64] FLAG: --qos-reserved=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112549    4619 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112555    4619 flags.go:64] FLAG: --register-node="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112561    4619 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112567    4619 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112583    4619 flags.go:64] FLAG: --registry-burst="10"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112589    4619 flags.go:64] FLAG: --registry-qps="5"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112596    4619 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112601    4619 flags.go:64] FLAG: --reserved-memory=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112609    4619 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112638    4619 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112645    4619 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112651    4619 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112658    4619 flags.go:64] FLAG: --runonce="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112664    4619 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112670    4619 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112676    4619 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112688    4619 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112695    4619 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112701    4619 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112708    4619 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112714    4619 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112720    4619 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112726    4619 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112732    4619 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112738    4619 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112745    4619 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112751    4619 flags.go:64] FLAG: --system-cgroups=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112757    4619 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112766    4619 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112772    4619 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112778    4619 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112785    4619 flags.go:64] FLAG: --tls-min-version=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112791    4619 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112798    4619 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112804    4619 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112810    4619 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112816    4619 flags.go:64] FLAG: --v="2"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112824    4619 flags.go:64] FLAG: --version="false"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112833    4619 flags.go:64] FLAG: --vmodule=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112840    4619 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.112846    4619 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113032    4619 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113039    4619 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113045    4619 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113051    4619 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113057    4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113062    4619 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113068    4619 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113076    4619 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113081    4619 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113087    4619 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113092    4619 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113097    4619 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113102    4619 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113107    4619 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113114    4619 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113120    4619 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113126    4619 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113131    4619 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113136    4619 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113141    4619 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113147    4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113152    4619 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113157    4619 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113163    4619 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113168    4619 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113173    4619 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113178    4619 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113183    4619 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113190    4619 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113195    4619 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113200    4619 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113205    4619 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113210    4619 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113215    4619 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113221    4619 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113226    4619 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113231    4619 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113236    4619 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113241    4619 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113249    4619 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113254    4619 feature_gate.go:330] unrecognized feature gate: PlatformOperators
PlatformOperators Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113259 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113264 4619 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113269 4619 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113275 4619 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113280 4619 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113285 4619 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113290 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113295 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113300 4619 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113305 4619 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113310 4619 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113316 4619 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113322 4619 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113327 4619 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113332 4619 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113337 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113345 4619 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113352 4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113358 4619 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113365 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113372 4619 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113378 4619 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113384 4619 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113391 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113397 4619 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113402 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113409 4619 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113415 4619 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113421 4619 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.113427 4619 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.113439 4619 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.122114 4619 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.122153 4619 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122222 4619 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122234 4619 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122239 4619 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122244 4619 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122248 4619 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122252 4619 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122256 4619 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122259 4619 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122263 4619 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122266 4619 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122270 4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122274 4619 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122277 4619 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122281 4619 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122284 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122288 4619 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122291 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122295 4619 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122299 4619 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122305 4619 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122309 4619 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122312 4619 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122316 4619 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122319 4619 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122323 4619 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122327 4619 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122330 4619 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122336 4619 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122342 4619 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122346 4619 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122350 4619 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122354 4619 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122357 4619 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122363 4619 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122367 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122371 4619 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122375 4619 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122378 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122382 4619 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122385 4619 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122389 4619 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122392 4619 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122396 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122399 4619 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122404 4619 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122408 4619 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122411 4619 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122415 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122419 4619 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122422 4619 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122426 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122430 4619 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122433 4619 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122436 4619 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122440 4619 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122445 4619 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122449 4619 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122452 4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122455 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122459 4619 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122463 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122466 4619 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122469 4619 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122473 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122477 4619 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122480 4619 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122484 4619 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122487 4619 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122491 4619 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122494 4619 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122498 4619 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.122506 4619 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122656 4619 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122664 4619 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122670 4619 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122674 4619 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122678 4619 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122682 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122686 4619 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122690 4619 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122693 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122697 4619 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122700 4619 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122704 4619 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122709 4619 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122713 4619 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122718 4619 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122722 4619 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122725 4619 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122729 4619 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122732 4619 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122738 4619 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122742 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122745 4619 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122749 4619 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122753 4619 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122757 4619 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122761 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122764 4619 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122768 4619 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122772 4619 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122775 4619 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122779 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122782 4619 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122785 4619 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122789 4619 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122793 4619 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122796 4619 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122800 4619 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122803 4619 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122807 4619 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122811 4619 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122816 4619 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122820 4619 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122824 4619 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122828 4619 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122832 4619 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122835 4619 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122839 4619 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122843 4619 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122846 4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122850 4619 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122853 4619 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122858 4619 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122861 4619 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122865 4619 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122868 4619 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122873 4619 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122877 4619 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122880 4619 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122884 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122887 4619 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122891 4619 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122894 4619 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122897 4619 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122901 4619 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122904 4619 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122908 4619 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122911 4619 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122915 4619 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122918 4619 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122922 4619 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.122925 4619 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.122931 4619 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.123347 4619 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.126030 4619 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.126122 4619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.126606 4619 server.go:997] "Starting client certificate rotation"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.126639 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.127004 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-29 19:53:36.408776035 +0000 UTC
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.127056 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.135407 4619 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.137004 4619 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.138729 4619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.150682 4619 log.go:25] "Validated CRI v1 runtime API"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.172123 4619 log.go:25] "Validated CRI v1 image API"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.174682 4619 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.177371 4619 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-10-48-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.177418 4619 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.191991 4619 manager.go:217] Machine: {Timestamp:2026-01-26 10:55:01.190736862 +0000 UTC m=+0.224777608 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6aae6ba9-96c1-4d99-8b9a-90adac40daa6 BootID:b26d7c31-8260-474d-b523-691101850253 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ca:37:a4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ca:37:a4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:53:87:23 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:aa:70:02 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3e:a2:0e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:74:9f:ae Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:a7:a7:00:b7:62 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:1f:a0:9f:92:51 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.192260 4619 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.192434 4619 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.193312 4619 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.193589 4619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.193654 4619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194059 4619 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194076 4619 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194282 4619 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194320 4619 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194554 4619 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.194749 4619 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.195948 4619 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.195984 4619 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.196027 4619 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.196046 4619 kubelet.go:324] "Adding apiserver pod source"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.196061 4619 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.198246 4619 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.198751 4619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.199049 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.199216 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.199239 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.199382 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.199531 4619 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200284 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200316 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200329 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200341 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200360 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200387 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200400 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200419 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200434 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200448 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200470 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200486 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.200908 4619 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.201406 4619 server.go:1280] "Started kubelet"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.202213 4619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.202894 4619 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.203082 4619 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.203319 4619 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 10:55:01 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.204247 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.204295 4619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.207366 4619 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.207412 4619 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.207797 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:53:44.677383942 +0000 UTC
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.209531 4619 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.209693 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.209821 4619 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.211068 4619 factory.go:55] Registering systemd factory
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.211095 4619 factory.go:221] Registration of the systemd container factory successfully
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.212120 4619 factory.go:153] Registering CRI-O factory
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.212143 4619 factory.go:221] Registration of the crio container factory successfully
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.212203 4619 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.212227 4619 factory.go:103] Registering Raw factory
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.212244 4619 manager.go:1196] Started watching for new ooms in manager
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.213748 4619 manager.go:319] Starting recovery of all containers
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.215701 4619 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.223380 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.223503 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.223262 4619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e4292a4d6ab21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 10:55:01.201373985 +0000 UTC m=+0.235414711,LastTimestamp:2026-01-26 10:55:01.201373985 +0000 UTC m=+0.235414711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226268 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226325 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226343 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226367 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226384 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226402 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226418 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226448 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226468 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226484 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226499 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226514 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226528 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226547 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226650 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226668 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226682 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226697 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226712 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226746 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226762 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226779 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226793 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226808 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226823 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226836 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226855 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226888 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226904 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226921 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226958 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226975 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.226989 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227004 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227019 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227033 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227048 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227061 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227078 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227094 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227108 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227124 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227139 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227153 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227167 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227182 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227199 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227214 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227229 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227245 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227261 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227276 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227305 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227323 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227339 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227357 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227374 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227388 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227403 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227417 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227433 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227448 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227463 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227482 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227497 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227511 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227525 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227539 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227554 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227568 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227586 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227603 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227636 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227671 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227686 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227700 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227716 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227730 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227746 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227763 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227780 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227797 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227812 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227827 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227841 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227856 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227870 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227885 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227901 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227918 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227932 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227949 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227964 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227978 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.227994 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228008 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228025 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228039 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228053 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228069 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228085 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228100 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228119 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228133 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228215 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228236 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228251 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228277 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228293 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228311 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228329 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228345 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228362 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228377 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228392 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228406 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228492 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228509 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228525 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228538 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228552 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228565 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228578 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228594 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228608 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228643 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228660 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228677 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228692 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228706 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228721 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228735 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228749 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228767 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228783 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228797 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228811 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228827 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228842 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228859 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228873 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228888 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228903 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228916 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228934 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228949 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228965 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.228982 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229002 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229019 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229032 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229045 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229060 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229075 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229091 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229108 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229122 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229136 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229190 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229207 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229221 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229234 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229249 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229263 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229277 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229290 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229305 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229318 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229332 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229344 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229359 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229372 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229386 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229400 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229416 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229429 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229443 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229456 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229469 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229482 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229497 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229511 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229525 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229538 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229554 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229567 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229581 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229596 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.229648 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230347 4619 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230390 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230408 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230423 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230437 4619 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230451 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230467 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230484 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230499 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230513 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230526 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230542 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230556 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230571 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230585 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230600 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230646 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230665 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230678 4619 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230692 4619 reconstruct.go:97] "Volume reconstruction finished"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.230702 4619 reconciler.go:26] "Reconciler: start to sync state"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.250834 4619 manager.go:324] Recovery completed
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.253894 4619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.259759 4619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.259835 4619 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.259875 4619 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.259963 4619 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.261583 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.261717 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.266413 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.268252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.268289 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.268300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.269142 4619 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.269163 4619 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.269186 4619 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.280110 4619 policy_none.go:49] "None policy: Start"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.281466 4619 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.281706 4619 state_mem.go:35] "Initializing new in-memory state store"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.310270 4619 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.349245 4619 manager.go:334] "Starting Device Plugin manager"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.349312 4619 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.349331 4619 server.go:79] "Starting device plugin registration server"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.349917 4619 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.349961 4619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.350426 4619 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.350524 4619 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.350540 4619 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.359634 4619 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.360918 4619 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.361014 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362120 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362174 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362326 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362654 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.362741 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363156 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363185 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363288 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363644 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.363731 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365035 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365044 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365051 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365058 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365543 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365730 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.365855 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.366175 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.366295 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.366333 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.367890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.367921 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.367934 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.367933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.368050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.368059 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.368063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.368167 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.368203 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.369045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.369071 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.369087 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.370310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.370338 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.370347 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.370505 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.370528 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.371126 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.371146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.371156 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.410470 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.432834 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.432910 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.432945 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.432970 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.432997 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433016 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433037 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433058 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433078 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433177 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433226 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433260 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433296 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433342 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.433370 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.451144 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.452480 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.452513 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.452524 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.452550 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.453139 4619 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535118 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535261 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535389 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535504 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535292 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535409 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535675 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535651 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535809 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535847 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535916 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535981 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535984 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536009 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.535924 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536064 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536086 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536127 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536147 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536166 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536185 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536232 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536728 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536725 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536808 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536809 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536846 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536770 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.536894 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.654289 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.663189 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.663261 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.663277 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.663310 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.663952 4619 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.709537 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.728588 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.738875 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.747546 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4b601e3e51548a5b0a05ef7f51221fa8a834d038fc550ee7c7a9bde08912a824 WatchSource:0}: Error finding container 4b601e3e51548a5b0a05ef7f51221fa8a834d038fc550ee7c7a9bde08912a824: Status 404 returned error can't find the container with id 4b601e3e51548a5b0a05ef7f51221fa8a834d038fc550ee7c7a9bde08912a824
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.756522 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.762109 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0797b97bb477d9c7c1ff72143af10f27c5f01cbcf6b0a1afc01a1542fcfa9edf WatchSource:0}: Error finding container 0797b97bb477d9c7c1ff72143af10f27c5f01cbcf6b0a1afc01a1542fcfa9edf: Status 404 returned error can't find the container with id 0797b97bb477d9c7c1ff72143af10f27c5f01cbcf6b0a1afc01a1542fcfa9edf
Jan 26 10:55:01 crc kubenswrapper[4619]: I0126 10:55:01.764774 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.774526 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c0a6c7c8f62954e6125bc481d859153dafc5d7a01ad0a24c0d6964b02713c63b WatchSource:0}: Error finding container c0a6c7c8f62954e6125bc481d859153dafc5d7a01ad0a24c0d6964b02713c63b: Status 404 returned error can't find the container with id c0a6c7c8f62954e6125bc481d859153dafc5d7a01ad0a24c0d6964b02713c63b
Jan 26 10:55:01 crc kubenswrapper[4619]: W0126 10:55:01.791222 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-badf589f63c45421ac847c75351ca6b65d0e46c4ee26c1a70004b6b2fa08cfe2 WatchSource:0}: Error finding container badf589f63c45421ac847c75351ca6b65d0e46c4ee26c1a70004b6b2fa08cfe2: Status 404 returned error can't find the container with id badf589f63c45421ac847c75351ca6b65d0e46c4ee26c1a70004b6b2fa08cfe2
Jan 26 10:55:01 crc kubenswrapper[4619]: E0126 10:55:01.812546 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms"
Jan 26 10:55:02 crc kubenswrapper[4619]: W0126 10:55:02.015650 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.015775 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.064666 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.066127 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.066198 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.066207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.066242 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.066827 4619 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.203980 4619 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.208888 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:49:10.351588 +0000 UTC
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.268376 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47" exitCode=0
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.268461 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.268636 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0a6c7c8f62954e6125bc481d859153dafc5d7a01ad0a24c0d6964b02713c63b"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.268787 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270588 4619 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98" exitCode=0
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270656 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270697 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0797b97bb477d9c7c1ff72143af10f27c5f01cbcf6b0a1afc01a1542fcfa9edf"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270703 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270734 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.270747 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.271259 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.272697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.272716 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.272725 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.273266 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274114 4619 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f" exitCode=0
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274222 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274275 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4b601e3e51548a5b0a05ef7f51221fa8a834d038fc550ee7c7a9bde08912a824"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274498 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274532 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274544 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.274577 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.275935 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.275956 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.275967 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.276816 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.276922 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"badf589f63c45421ac847c75351ca6b65d0e46c4ee26c1a70004b6b2fa08cfe2"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.281648 4619 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7" exitCode=0
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.281687 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.281711 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"611e4148a042f8711e972a99ab394b7a7caa8d04729e90ad41e7322c50ada86c"}
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.282095 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.283017 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.283040 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.283050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: W0126 10:55:02.383787 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.383915 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:02 crc kubenswrapper[4619]: W0126 10:55:02.551001 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.551090 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.614437 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s"
Jan 26 10:55:02 crc kubenswrapper[4619]: W0126 10:55:02.806018 4619 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.806165 4619 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.867725 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.874151 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.874199 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.874209 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:02 crc kubenswrapper[4619]: I0126 10:55:02.874239 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 10:55:02 crc kubenswrapper[4619]: E0126 10:55:02.875714 4619 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.148272 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 10:55:03 crc kubenswrapper[4619]: E0126 10:55:03.149378 4619 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 10:55:03 crc kubenswrapper[4619]: E0126 10:55:03.197113 4619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e4292a4d6ab21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 10:55:01.201373985 +0000 UTC m=+0.235414711,LastTimestamp:2026-01-26 10:55:01.201373985 +0000 UTC m=+0.235414711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.203591 4619 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.209663 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:41:54.589027518 +0000 UTC
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.299190 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.299243 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.299256 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.299355 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.300467 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.300494 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.300503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.307327 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.307405 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.307419 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.307588 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.308520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.308544 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.308552 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.311718 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.311749 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.311761 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.311769 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.314124 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.314206 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.314960 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.314984 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.314992 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.320461 4619 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68" exitCode=0
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.320490 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68"}
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.320574 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.324406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.324425 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:03 crc kubenswrapper[4619]: I0126 10:55:03.324433 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.210477 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:17:22.135174202 +0000 UTC
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.326985 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8"}
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.327156 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.328400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.328434 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.328449 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.329639 4619 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="41190c5262d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56" exitCode=0
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.329755 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.329838 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"41190c5262d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56"}
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.330028 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.330742 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.330767 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.330779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.331002 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.331100 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.331208 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.476746 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.478157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.478189 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.478199 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:04 crc kubenswrapper[4619]: I0126 10:55:04.478224 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.210876 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:50:41.255835751 +0000 UTC
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.337820 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.337889 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338382 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6071a43e179915476d4051e159c82a42544006d424aa36a5a81ef4efea75823b"}
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338429 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bca5e9ec76892ec00b79ecafda52f33afe0fcfe3b40bceaa76e586d95a62d054"}
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338654 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1136e39d9b7d5eff7086e9983574a1c22186a06d9c4d2b5d566d74202749f487"}
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338666 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ec7e9ee0bab7dd1ad1b12a3a8ad86ae68690e864d79c8804d8a3ad55cc63cd03"}
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338677 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5f0b03f072d78d4ff1bac8a10e9522f3b9af4121b08380e95a58b18e64fade8e"}
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.338779 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.339460 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.339488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.339499 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.340183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.340205 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.340216 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.594018 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.594263 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.595596 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.595681 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:05 crc kubenswrapper[4619]: I0126 10:55:05.595693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.211856 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:00:05.989591159 +0000 UTC
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.860792 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.861006 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.862326 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.862353 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.862363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.990559 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.990808 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.990862 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.992248 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.992315 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:06 crc kubenswrapper[4619]: I0126 10:55:06.992335 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.113725 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.213065 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:29:22.5501688 +0000 UTC
Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.343604 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 10:55:07 crc 
kubenswrapper[4619]: I0126 10:55:07.343677 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.344512 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.344547 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.344562 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.527771 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.528148 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.529714 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.529749 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.529762 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.530600 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 10:55:07 crc kubenswrapper[4619]: I0126 10:55:07.533421 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.213978 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:37:46.747466988 +0000 UTC Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.347092 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.349010 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.349089 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.349109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.909781 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.910095 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.912092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:08 crc kubenswrapper[4619]: I0126 10:55:08.912155 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:08 crc 
kubenswrapper[4619]: I0126 10:55:08.912167 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.202519 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.202847 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.204778 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.204851 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.204870 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:09 crc kubenswrapper[4619]: I0126 10:55:09.215169 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:39:24.853445203 +0000 UTC Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.099191 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.099415 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.100852 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.100902 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.100919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.215590 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:12:05.828912549 +0000 UTC Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.723190 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.723464 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.725878 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.725920 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:10 crc kubenswrapper[4619]: I0126 10:55:10.725933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:11 crc kubenswrapper[4619]: I0126 10:55:11.216348 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:20:19.980244805 +0000 UTC Jan 26 10:55:11 crc kubenswrapper[4619]: E0126 
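
The kubelet-serving lines above pair a fixed certificate expiry (2026-02-24 05:53:03 UTC) with a different rotation deadline on every pass. The scatter is deliberate jitter: the deadline is re-drawn from a randomized fraction of the certificate's validity window each time it is evaluated, so a fleet of kubelets does not all try to rotate at once. A minimal Go sketch of that visible behavior, assuming a 70-90% window and an invented issue time (the log only shows the expiry); this is an illustration, not the actual client-go certificate manager code:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline picks a jittered point at 70-90% of the
    // certificate's validity window: a fixed expiry goes in, a
    // different deadline comes out on every call, matching the
    // scatter of deadlines in the log above. (Illustrative only.)
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
        for i := 0; i < 3; i++ {
            fmt.Println(nextRotationDeadline(notBefore, notAfter)) // three different deadlines
        }
    }

The absolute deadlines printed depend entirely on the assumed issue time; only the re-jitter-per-evaluation behavior is the point.
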
Jan 26 10:55:11 crc kubenswrapper[4619]: E0126 10:55:11.360481 4619 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.216531 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:42:00.920283333 +0000 UTC
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.216656 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.216791 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.218294 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.218344 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.218362 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.220994 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.359039 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.360278 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.360312 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:12 crc kubenswrapper[4619]: I0126 10:55:12.360322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:13 crc kubenswrapper[4619]: I0126 10:55:13.217230 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:14:15.606593694 +0000 UTC
Jan 26 10:55:13 crc kubenswrapper[4619]: I0126 10:55:13.916480 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 26 10:55:13 crc kubenswrapper[4619]: I0126 10:55:13.916588 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 26 10:55:13 crc kubenswrapper[4619]: I0126 10:55:13.929009 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 26 10:55:13 crc kubenswrapper[4619]: I0126 10:55:13.929114 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 26 10:55:14 crc kubenswrapper[4619]: I0126 10:55:14.217323 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:20:58.516742751 +0000 UTC
Jan 26 10:55:15 crc kubenswrapper[4619]: I0126 10:55:15.217449 4619 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 10:55:15 crc kubenswrapper[4619]: I0126 10:55:15.217680 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 10:55:15 crc kubenswrapper[4619]: I0126 10:55:15.218473 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:15:44.935295044 +0000 UTC
Jan 26 10:55:15 crc kubenswrapper[4619]: I0126 10:55:15.761674 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 10:55:15 crc kubenswrapper[4619]: I0126 10:55:15.761761 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.219501 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:43:52.264825129 +0000 UTC
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.882380 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.882592 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.884020 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.884080 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
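
The probe failures above all funnel through the same prober rule: an HTTP probe counts any status in [200, 400) as success, so the 403 that /livez returns to an unauthenticated client during early startup is a hard failure, and transport-level errors (connection refused, client timeout) fail before a status code even exists. A small Go sketch of that classification; the endpoint is the probe URL shape from the log with an assumed port, and this is a simplification, not the kubelet's prober package:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce applies the kubelet-style success rule: transport
    // errors fail outright; otherwise any status in [200, 400)
    // passes and everything else (401, 403, 5xx, ...) fails.
    func probeOnce(url string) error {
        client := &http.Client{
            Timeout: 10 * time.Second, // probes give up quickly, as in the timeout failure above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe-style: no CA pinning
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return nil
        }
        return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        fmt.Println(probeOnce("https://192.168.126.11:6443/livez")) // path from the log, port assumed
    }
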
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.884092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:16 crc kubenswrapper[4619]: I0126 10:55:16.894607 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.000356 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.000664 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.001197 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.001280 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.002292 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.002323 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.002335 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.005744 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.220251 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:51:23.006980212 +0000 UTC
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.374448 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.374448 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.375126 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.375228 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376337 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376734 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376794 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:17 crc kubenswrapper[4619]: I0126 10:55:17.376824 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.221343 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:14:49.821451471 +0000 UTC
Jan 26 10:55:18 crc kubenswrapper[4619]: E0126 10:55:18.905886 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.907784 4619 trace.go:236] Trace[1623963490]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 10:55:05.547) (total time: 13360ms):
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[1623963490]: ---"Objects listed" error: 13360ms (10:55:18.907)
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[1623963490]: [13.360704862s] [13.360704862s] END
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.907805 4619 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.909262 4619 trace.go:236] Trace[683190847]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 10:55:04.445) (total time: 14463ms):
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[683190847]: ---"Objects listed" error: 14463ms (10:55:18.909)
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[683190847]: [14.463920684s] [14.463920684s] END
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.909279 4619 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.909762 4619 trace.go:236] Trace[477221532]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 10:55:04.186) (total time: 14723ms):
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[477221532]: ---"Objects listed" error: 14723ms (10:55:18.909)
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[477221532]: [14.723221551s] [14.723221551s] END
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.909791 4619 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.910545 4619 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
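
The Trace[...] blocks and "Caches populated" lines above are client-go reflectors at work: each informer performs an initial List (13-14 s each here, while the apiserver was still settling), then switches to a Watch, and dependent code blocks until the cache reports synced. A minimal sketch of the same informer pattern, with a hypothetical kubeconfig path standing in for however the client is really configured:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Each informer's reflector does List-then-Watch; WaitForCacheSync
        // returns once the initial List has populated the local cache,
        // which is the moment the "Caches populated for *v1.Node ..." lines mark.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactory(kubernetes.NewForConfigOrDie(cfg), 10*time.Minute)
        nodes := factory.Core().V1().Nodes().Informer()
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
            panic("caches never synced")
        }
        fmt.Println("caches populated")
    }
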
Jan 26 10:55:18 crc kubenswrapper[4619]: E0126 10:55:18.910547 4619 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.911522 4619 trace.go:236] Trace[1729387702]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 10:55:04.775) (total time: 14135ms):
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[1729387702]: ---"Objects listed" error: 14135ms (10:55:18.911)
Jan 26 10:55:18 crc kubenswrapper[4619]: Trace[1729387702]: [14.135693471s] [14.135693471s] END
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.911558 4619 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 10:55:18 crc kubenswrapper[4619]: I0126 10:55:18.915725 4619 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.204681 4619 apiserver.go:52] "Watching apiserver"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.207523 4619 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.207876 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.208277 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.208352 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.208288 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.208767 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.208832 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.208842 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
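
The "Error syncing pod" entries above all reduce to one condition: the container runtime reports NetworkReady=false because nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet, so no pod sandbox can get a network. Conceptually the gate is just a scan of that directory for a usable config; a rough Go illustration (simplified, the real CRI-O/ocicni check also parses and validates the files):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether the CNI conf dir holds any
    // plausible network config. While it returns false, the runtime
    // keeps reporting NetworkReady=false and sandbox creation is
    // skipped, which is what produces the errors logged above.
    func cniConfigPresent(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false // missing dir counts the same as empty
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(cniConfigPresent("/etc/kubernetes/cni/net.d")) // path from the log
    }
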
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.208889 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.208979 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.209036 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.210233 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.211812 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.211917 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.212040 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.212917 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.214135 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.214469 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.214677 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.214790 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.216877 4619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.221567 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:22:35.83197281 +0000 UTC
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.241297 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.255972 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.269276 4619 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37480->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.269351 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37480->192.168.126.11:17697: read: connection reset by peer" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.272546 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.283655 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.292381 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.307385 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.311928 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.311984 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312017 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312035 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312055 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312086 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312110 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312132 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" 
(UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312154 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312172 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312189 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312218 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312241 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312262 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312286 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312318 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312342 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312366 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312394 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312420 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312445 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312470 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312513 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312539 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312578 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.312687 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.313088 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.313387 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.313564 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.313904 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.313896 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314067 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314323 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314346 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314352 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314424 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314591 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314704 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314805 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.314843 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315038 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315079 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315105 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315149 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315171 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315319 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315381 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315452 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315515 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315543 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315704 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315778 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315742 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.315751 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316033 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316072 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316228 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316365 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316389 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316392 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316448 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316380 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316705 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316892 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.316997 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317022 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317063 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317094 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317833 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317857 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317878 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317901 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317928 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317946 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317965 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317984 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318008 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.317784 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318401 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318522 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318531 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318828 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318905 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318592 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319147 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319129 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.318028 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319279 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319309 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319332 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319353 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319238 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319651 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319710 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319746 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319772 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.319893 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320300 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320495 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320590 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320681 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320703 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320724 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320746 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320768 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320787 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320806 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320824 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320845 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320865 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320927 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320945 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320977 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.320995 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321017 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321037 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321062 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321082 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321103 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321124 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321145 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321170 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321193 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321211 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321249 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321268 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321286 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321304 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321323 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321344 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321363 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321383 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321401 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321420 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321439 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321458 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321476 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321493 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321515 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321533 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321553 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321572 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321590 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321607 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321678 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321696 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321712 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321728 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321745 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321761 4619 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321780 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321797 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321813 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321829 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321846 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321864 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321879 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321894 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321912 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321931 4619 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321949 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321967 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.321983 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322002 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322019 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322059 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322076 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322094 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322110 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322129 4619 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322145 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322161 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322189 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322208 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322224 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322240 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322264 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322281 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322297 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322313 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322329 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322347 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322883 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.322950 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323241 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323264 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323266 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323283 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323379 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323426 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323467 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323592 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323655 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323681 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323708 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323738 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323791 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323819 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323846 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323872 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323897 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323920 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323945 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.323969 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324068 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324095 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324120 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324147 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324172 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324190 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324213 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324238 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324259 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324279 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324298 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324317 4619 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324336 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324355 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324373 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324391 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324446 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324603 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324772 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.324940 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325153 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325314 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325347 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325770 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325755 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325800 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325819 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.325829 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.327722 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326127 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326166 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326323 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326329 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326388 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326761 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.326785 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.327843 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.328028 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.328045 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.328410 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.328425 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329396 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.327771 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329523 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329560 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329585 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329603 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329650 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329670 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329667 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329691 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329703 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329715 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329725 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329737 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329758 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329779 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329799 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329817 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329835 4619 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329852 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329870 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329886 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329887 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329921 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329939 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329960 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329978 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329977 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.329998 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330019 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330037 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330055 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330111 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330130 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330137 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330232 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330266 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330293 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330308 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330319 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330342 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330367 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330441 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330486 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330509 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330546 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330574 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330595 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330703 4619 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330734 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330745 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330755 4619 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330765 4619 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330806 4619 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330816 4619 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330813 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330827 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330840 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330854 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330864 4619 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330875 4619 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330885 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330894 4619 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330906 4619 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330916 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330924 4619 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330937 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330947 4619 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330956 4619 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node 
\"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330968 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330977 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330987 4619 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.330997 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331007 4619 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331017 4619 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331025 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331035 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331045 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331054 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331064 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331073 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331082 4619 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc 
kubenswrapper[4619]: I0126 10:55:19.331092 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331103 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331152 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331162 4619 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331172 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331181 4619 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331191 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331197 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331201 4619 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331246 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331259 4619 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331270 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331281 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331299 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331259 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331297 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331421 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331434 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331445 4619 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332460 4619 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332481 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332493 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332505 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332516 4619 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332527 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332539 4619 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332549 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332564 4619 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332576 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332586 4619 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332595 4619 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332606 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332632 4619 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332646 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332658 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332669 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332681 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332693 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332705 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332718 4619 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332728 4619 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332769 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332779 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332789 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332800 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332831 4619 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332840 4619 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332850 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332859 4619 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332868 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331561 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331727 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331883 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.331901 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332053 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332084 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332305 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332408 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332432 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332438 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332807 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.332989 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.334955 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.334973 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.335590 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.335759 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:19.83572727 +0000 UTC m=+18.869767986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.335900 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.336176 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.336298 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.336250 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.336525 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.336865 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.337010 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.337149 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.337182 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.337706 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.337830 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338044 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338114 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338145 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338387 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338396 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338816 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.338856 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:55:19.83864205 +0000 UTC m=+18.872682766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.339250 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.339278 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.335131 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.339559 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.339788 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338597 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.340047 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.340451 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.341461 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.341794 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.343144 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.338458 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.343656 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.343872 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.344158 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.344329 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.344490 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.345173 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.345377 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.345583 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.345795 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.345991 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.346427 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.346725 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.350245 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.343532 4619 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.351470 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.351505 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.351555 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.352222 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.352541 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353256 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353280 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353358 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.353420 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353511 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353522 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353532 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.353546 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:19.853515659 +0000 UTC m=+18.887556575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353684 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.353716 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354049 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354102 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354441 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354454 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354734 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354770 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354787 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.354840 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355068 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355164 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355236 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355280 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355443 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355475 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355493 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355537 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355577 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355669 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.355858 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356035 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356253 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356320 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356279 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356525 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356355 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.356663 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.356834 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.356857 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.356872 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.356956 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:19.856932554 +0000 UTC m=+18.890973270 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.357230 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.357255 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.357269 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.357328 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:19.857307724 +0000 UTC m=+18.891348440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.357591 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.357741 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.357837 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.358424 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.358740 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.358827 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.359188 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.359462 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.359591 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.359638 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.359761 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.360835 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.361505 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.362193 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.362936 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.365381 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.365976 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.377572 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.382291 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.384796 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8" exitCode=255 Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.384876 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8"} Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.386127 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.400332 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.405524 4619 scope.go:117] "RemoveContainer" containerID="5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.407086 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.407356 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436200 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436257 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436302 4619 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436318 4619 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436330 4619 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436343 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436353 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436361 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436370 4619 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436378 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436388 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436399 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436411 4619 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436423 4619 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436434 4619 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436445 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436456 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436465 4619 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436474 4619 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436484 4619 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436497 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436508 4619 reconciler_common.go:293] "Volume 
detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436519 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436531 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436543 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436555 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436568 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436580 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436591 4619 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436602 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436645 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436659 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436670 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436681 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436693 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436704 4619 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436715 4619 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436727 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436738 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436749 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436760 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436771 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436782 4619 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436773 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436794 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436886 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436903 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc 
kubenswrapper[4619]: I0126 10:55:19.436918 4619 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436932 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436947 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436959 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436969 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436978 4619 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436990 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437001 4619 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437011 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437020 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437029 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437038 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437048 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc 
kubenswrapper[4619]: I0126 10:55:19.437057 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437066 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437075 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437085 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437094 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437104 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437112 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437122 4619 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437132 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437141 4619 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437150 4619 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437160 4619 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437170 4619 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath 
\"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437180 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437188 4619 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437197 4619 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437206 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437217 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437226 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437234 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437242 4619 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437251 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437261 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437269 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437296 4619 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437305 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437313 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" 
(UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437325 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437333 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437341 4619 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437350 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437359 4619 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437367 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437377 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437385 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437394 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437402 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437411 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437420 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437428 4619 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437438 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437446 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437455 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437464 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437472 4619 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437480 4619 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437490 4619 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437522 4619 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437532 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437546 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437554 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437565 4619 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437575 4619 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node 
\"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437584 4619 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437593 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437604 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.437625 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.436975 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.470225 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.486094 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.505033 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.523072 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.525223 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.539038 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.543209 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.543557 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 10:55:19 crc kubenswrapper[4619]: W0126 10:55:19.555174 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b7b44304a744c6950ae6e6b73379861b940567d623716b4b283658c112ee7bb0 WatchSource:0}: Error finding container b7b44304a744c6950ae6e6b73379861b940567d623716b4b283658c112ee7bb0: Status 404 returned error can't find the container with id b7b44304a744c6950ae6e6b73379861b940567d623716b4b283658c112ee7bb0 Jan 26 10:55:19 crc kubenswrapper[4619]: W0126 10:55:19.559352 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a610159f7b4b25c731152c33f1d440e2035008c39ec354532a5e39a3ec8bb196 WatchSource:0}: Error finding container a610159f7b4b25c731152c33f1d440e2035008c39ec354532a5e39a3ec8bb196: Status 404 returned error can't find the container with id a610159f7b4b25c731152c33f1d440e2035008c39ec354532a5e39a3ec8bb196 Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.841351 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.841537 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.841660 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 10:55:20.841600419 +0000 UTC m=+19.875641135 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.841756 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.841867 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:20.841838226 +0000 UTC m=+19.875878982 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.942185 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.942662 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:19 crc kubenswrapper[4619]: I0126 10:55:19.942696 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.942402 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.942893 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:20.942876562 +0000 UTC m=+19.976917278 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943278 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943297 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943309 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943336 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:20.943328334 +0000 UTC m=+19.977369050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.942833 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943356 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943365 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:19 crc kubenswrapper[4619]: E0126 10:55:19.943392 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:20.943384165 +0000 UTC m=+19.977424881 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.222754 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:51:31.421053131 +0000 UTC Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.389238 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a610159f7b4b25c731152c33f1d440e2035008c39ec354532a5e39a3ec8bb196"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.391368 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.391517 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.391596 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7b44304a744c6950ae6e6b73379861b940567d623716b4b283658c112ee7bb0"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.393282 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.393382 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a792b5b8ff2ddc65d48b4c8c17f9fdc162dc1f023990c3c01c1abd6132907fad"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.395773 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.397932 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30"} Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.398301 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.416512 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.445150 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.462284 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.483998 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.506121 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.530075 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.549649 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.567738 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.585934 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.609302 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.629137 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.646085 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.661854 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.679001 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.851403 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.851545 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.851572 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:55:22.851551747 +0000 UTC m=+21.885592463 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.851681 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.851741 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:22.851728681 +0000 UTC m=+21.885769407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.952903 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.952961 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:20 crc kubenswrapper[4619]: I0126 10:55:20.952995 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953087 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953141 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953174 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953185 4619 projected.go:194] Error preparing data 
for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953161 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:22.953144318 +0000 UTC m=+21.987185044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953254 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:22.953237341 +0000 UTC m=+21.987278057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953314 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953324 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953344 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:20 crc kubenswrapper[4619]: E0126 10:55:20.953388 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:22.953376945 +0000 UTC m=+21.987417661 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.223340 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:08:03.278324068 +0000 UTC Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.260722 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.260766 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:21 crc kubenswrapper[4619]: E0126 10:55:21.260877 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.260944 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:21 crc kubenswrapper[4619]: E0126 10:55:21.261057 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:21 crc kubenswrapper[4619]: E0126 10:55:21.261224 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.266185 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.266790 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.267704 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.268332 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.269032 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.269596 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.271780 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.272373 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.273560 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.274169 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.274761 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.276049 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.276576 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.277480 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.277984 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.278906 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.279486 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.279940 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.282948 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.283779 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.284538 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.285875 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.286422 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.287503 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.288348 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.288819 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.290193 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.291060 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.295887 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.296859 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.298095 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.298591 4619 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.298707 4619 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.300410 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.301596 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.302047 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.303543 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.304527 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.305061 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.306089 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.306774 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.307570 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.308189 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.309134 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.309710 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.310561 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.311114 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.312088 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.312880 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.313867 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.314356 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.315422 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.315716 4619 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.316236 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.316792 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.317661 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.329823 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.342008 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.353989 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.370041 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:21 crc kubenswrapper[4619]: I0126 10:55:21.385050 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.110687 4619 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.112861 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.113042 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.113142 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.113337 4619 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.121545 4619 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.121966 4619 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.123642 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.123764 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.123831 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.123933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.124007 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.157798 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.163674 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.163771 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.163788 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.163809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.163823 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.181368 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.186916 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.186981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.187000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.187029 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.187047 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.213655 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.219599 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.219740 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.219768 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.219799 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.219823 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.222519 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.223558 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:29:46.020246674 +0000 UTC Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.229287 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.235369 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.241867 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.247464 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.247527 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.247551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.247585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.247611 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.248051 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.262990 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.263120 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.264454 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.265374 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.265455 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.265483 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.265519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.265562 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.280241 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.299516 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.318891 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.340534 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.359250 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.368154 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.368273 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.368290 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.368310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.368328 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.374554 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.392008 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.410260 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.427315 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.442809 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.456669 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471385 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471756 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471812 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471824 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471845 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.471860 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.485366 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:22Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.575528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.575566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.575596 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.575641 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.575653 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.678484 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.678668 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.678693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.678722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.678744 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.781506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.781545 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.781558 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.781572 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.781584 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.870535 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.870739 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.870975 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.871150 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:55:26.871105144 +0000 UTC m=+25.905145900 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.871247 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:26.871223107 +0000 UTC m=+25.905263833 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.885751 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.885870 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.885904 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.885950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.886012 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.972040 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.972152 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.972209 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972332 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972351 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972410 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 
10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972430 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:26.972407088 +0000 UTC m=+26.006447844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972433 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972547 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:26.97251484 +0000 UTC m=+26.006555566 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972562 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972663 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972697 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:22 crc kubenswrapper[4619]: E0126 10:55:22.972833 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:26.972789168 +0000 UTC m=+26.006830074 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.989523 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.989582 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.989597 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.989652 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:22 crc kubenswrapper[4619]: I0126 10:55:22.989664 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:22Z","lastTransitionTime":"2026-01-26T10:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.093120 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.093192 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.093207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.093234 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.093254 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.195955 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.195989 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.195997 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.196011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.196020 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.223967 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:55:55.335919267 +0000 UTC Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.260597 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:23 crc kubenswrapper[4619]: E0126 10:55:23.260813 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.261303 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:23 crc kubenswrapper[4619]: E0126 10:55:23.261372 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.261442 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:23 crc kubenswrapper[4619]: E0126 10:55:23.261530 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.298933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.298968 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.298980 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.298997 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.299008 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.401812 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.401856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.401871 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.401891 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.401909 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.411317 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.461244 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.476255 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.489939 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.513327 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.515449 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.515500 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.515512 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.515532 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.515550 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.548635 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.567131 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.571335 4619 csr.go:261] certificate signing request csr-jdl85 is approved, waiting to be issued Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.594936 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.617852 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.617880 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.617890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.617907 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.617919 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.635651 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.680099 4619 csr.go:257] certificate signing request csr-jdl85 is issued Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.720430 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.720469 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.720480 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.720501 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.720513 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.821943 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-v22hs"] Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.822284 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.823097 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.823140 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.823149 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.823168 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.823181 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.826086 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.826286 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.826400 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.846751 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.879719 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhvz7\" (UniqueName: \"kubernetes.io/projected/bd5a1e1f-e05a-4fec-82df-3491fad4b710-kube-api-access-zhvz7\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.879765 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd5a1e1f-e05a-4fec-82df-3491fad4b710-hosts-file\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.880614 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.905174 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.925642 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.925711 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.925728 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.925759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.925774 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:23Z","lastTransitionTime":"2026-01-26T10:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.969117 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.980589 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhvz7\" (UniqueName: 
\"kubernetes.io/projected/bd5a1e1f-e05a-4fec-82df-3491fad4b710-kube-api-access-zhvz7\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.980694 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd5a1e1f-e05a-4fec-82df-3491fad4b710-hosts-file\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:23 crc kubenswrapper[4619]: I0126 10:55:23.980808 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd5a1e1f-e05a-4fec-82df-3491fad4b710-hosts-file\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.014568 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhvz7\" (UniqueName: \"kubernetes.io/projected/bd5a1e1f-e05a-4fec-82df-3491fad4b710-kube-api-access-zhvz7\") pod \"node-resolver-v22hs\" (UID: \"bd5a1e1f-e05a-4fec-82df-3491fad4b710\") " pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.029579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.029859 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.029969 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.030127 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.030235 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.036585 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.097646 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.125184 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.132584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.132650 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.132665 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.132694 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.132704 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.135331 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-v22hs" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.161098 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.185103 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.224669 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:16:05.196773366 +0000 UTC Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.239058 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.239092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.239103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.239135 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.239147 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.348443 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.348957 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.348969 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.348984 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.348995 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.415871 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v22hs" event={"ID":"bd5a1e1f-e05a-4fec-82df-3491fad4b710","Type":"ContainerStarted","Data":"e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.415916 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v22hs" event={"ID":"bd5a1e1f-e05a-4fec-82df-3491fad4b710","Type":"ContainerStarted","Data":"507213a65765c1825b6339289304801733baa6b71602fe3a54b098d67d3d905f"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.437266 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.452850 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.453091 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.453398 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.453567 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.453764 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.452939 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.468247 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.485072 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.496183 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.519346 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.545674 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.556176 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.556220 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.556247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.556264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.556273 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.560967 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.574919 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.659087 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.659146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.659157 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.659177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.659189 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.673780 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-28hd4"] Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.674963 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.680924 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 10:50:23 +0000 UTC, rotation deadline is 2026-11-05 00:57:52.603201014 +0000 UTC Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.680967 4619 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6782h2m27.922237415s for next certificate rotation Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.681099 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.681285 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.681736 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.682333 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.686740 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-684hz"] Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.687132 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.687491 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5j9c8"] Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.688246 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.688264 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.693086 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.695138 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.695655 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.695672 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.696761 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.696935 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.698048 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.706377 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.729678 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.761706 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.762035 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.762096 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.762156 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.762301 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787079 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-binary-copy\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787156 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-multus-certs\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787179 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-system-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787200 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-k8s-cni-cncf-io\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787227 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-multus-daemon-config\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787400 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-etc-kubernetes\") pod \"multus-684hz\" (UID: 
\"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f33a41bb-6406-4c73-8024-4acd72817832-rootfs\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787531 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f33a41bb-6406-4c73-8024-4acd72817832-proxy-tls\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787559 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cnibin\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787615 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-multus\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787658 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-bin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787681 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-os-release\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787713 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787730 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-system-cni-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787759 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-os-release\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787779 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-cni-binary-copy\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.787958 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33a41bb-6406-4c73-8024-4acd72817832-mcd-auth-proxy-config\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788031 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-socket-dir-parent\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788059 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788091 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9lh\" (UniqueName: \"kubernetes.io/projected/f33a41bb-6406-4c73-8024-4acd72817832-kube-api-access-rk9lh\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788169 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788234 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-conf-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788258 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fvx\" (UniqueName: \"kubernetes.io/projected/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-kube-api-access-s9fvx\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " 
pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788291 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrcn\" (UniqueName: \"kubernetes.io/projected/8aab93f8-6555-4389-b15c-9af458caa339-kube-api-access-tvrcn\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788319 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-cnibin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788337 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-kubelet\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788361 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-hostroot\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.788398 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-netns\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.793048 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.822047 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.864899 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.864947 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.864957 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.864975 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.864986 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.870317 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserv
er-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.889558 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f33a41bb-6406-4c73-8024-4acd72817832-rootfs\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.889916 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f33a41bb-6406-4c73-8024-4acd72817832-proxy-tls\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.889753 
4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f33a41bb-6406-4c73-8024-4acd72817832-rootfs\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.889988 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cnibin\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890092 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-multus\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890138 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-bin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890158 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-os-release\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890179 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890198 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-system-cni-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890229 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-os-release\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890247 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-cni-binary-copy\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890264 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-multus\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890284 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33a41bb-6406-4c73-8024-4acd72817832-mcd-auth-proxy-config\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890365 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-socket-dir-parent\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890373 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-os-release\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890882 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cnibin\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890396 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890966 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-cni-bin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.890385 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891316 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33a41bb-6406-4c73-8024-4acd72817832-mcd-auth-proxy-config\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891401 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9lh\" (UniqueName: \"kubernetes.io/projected/f33a41bb-6406-4c73-8024-4acd72817832-kube-api-access-rk9lh\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891431 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891716 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-cni-binary-copy\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891798 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-os-release\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891810 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891828 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891832 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-system-cni-dir\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891860 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-conf-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891893 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fvx\" (UniqueName: \"kubernetes.io/projected/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-kube-api-access-s9fvx\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891931 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrcn\" (UniqueName: \"kubernetes.io/projected/8aab93f8-6555-4389-b15c-9af458caa339-kube-api-access-tvrcn\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891946 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-socket-dir-parent\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.891978 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-multus-conf-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.892957 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-cnibin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893021 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-kubelet\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893044 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-hostroot\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893094 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-netns\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893130 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-multus-certs\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893153 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-binary-copy\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893197 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-system-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893224 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-k8s-cni-cncf-io\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893245 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-multus-daemon-config\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893267 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-etc-kubernetes\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893347 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-etc-kubernetes\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893374 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-hostroot\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893404 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-cnibin\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893423 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-var-lib-kubelet\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893464 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-netns\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893488 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-multus-certs\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz"
Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.893980 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-cni-binary-copy\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: 
\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.894053 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-system-cni-dir\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.894088 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aab93f8-6555-4389-b15c-9af458caa339-host-run-k8s-cni-cncf-io\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.894549 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aab93f8-6555-4389-b15c-9af458caa339-multus-daemon-config\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.896780 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f33a41bb-6406-4c73-8024-4acd72817832-proxy-tls\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.906251 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.920199 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fvx\" (UniqueName: \"kubernetes.io/projected/e3c5f7d0-80be-4cd1-8700-edae2eb1a04a-kube-api-access-s9fvx\") pod \"multus-additional-cni-plugins-5j9c8\" (UID: \"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\") " pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.938020 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrcn\" (UniqueName: \"kubernetes.io/projected/8aab93f8-6555-4389-b15c-9af458caa339-kube-api-access-tvrcn\") pod \"multus-684hz\" (UID: \"8aab93f8-6555-4389-b15c-9af458caa339\") " pod="openshift-multus/multus-684hz" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.940056 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9lh\" (UniqueName: \"kubernetes.io/projected/f33a41bb-6406-4c73-8024-4acd72817832-kube-api-access-rk9lh\") pod \"machine-config-daemon-28hd4\" (UID: \"f33a41bb-6406-4c73-8024-4acd72817832\") " pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.948323 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.967678 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.967955 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.968037 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.968133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.968193 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:24Z","lastTransitionTime":"2026-01-26T10:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.971021 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.993272 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:24 crc kubenswrapper[4619]: I0126 10:55:24.997394 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.004284 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-684hz" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.010408 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.029812 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: W0126 10:55:25.051173 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aab93f8_6555_4389_b15c_9af458caa339.slice/crio-63da723e811799abb1a8087bd1671cab9f1f092249953ed0cbe67c463a923e52 WatchSource:0}: Error finding container 63da723e811799abb1a8087bd1671cab9f1f092249953ed0cbe67c463a923e52: Status 404 returned error can't find the container with id 63da723e811799abb1a8087bd1671cab9f1f092249953ed0cbe67c463a923e52 Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.063360 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.076887 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.076926 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.076935 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.076951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.076963 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.084344 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.106472 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.113069 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6xtv"] Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.114196 4619 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.116609 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.117953 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.121165 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.121452 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.121708 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.121883 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.122005 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.143061 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.188364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.188418 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.188431 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.188450 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.188461 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198134 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198380 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198401 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198427 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kls7n\" (UniqueName: \"kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198449 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198466 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198534 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198558 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198576 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198773 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198847 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198870 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198889 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198905 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198931 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198970 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.198983 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.199012 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.199051 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.199069 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.225085 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:09:27.511131858 +0000 UTC Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.233810 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.265000 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.265508 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.265576 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:25 crc kubenswrapper[4619]: E0126 10:55:25.265669 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:25 crc kubenswrapper[4619]: E0126 10:55:25.265760 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.265922 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:25 crc kubenswrapper[4619]: E0126 10:55:25.265998 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.281993 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.293230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.293294 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.293305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.293323 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc 
kubenswrapper[4619]: I0126 10:55:25.293337 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.298298 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299707 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299815 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299888 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299891 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert\") pod 
\"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299978 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300027 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300047 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300421 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300438 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300459 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300473 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300491 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: 
\"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300524 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300540 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300560 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.299848 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300631 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300592 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300662 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300692 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300696 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc 
kubenswrapper[4619]: I0126 10:55:25.300686 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300728 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300759 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300780 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300802 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300821 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kls7n\" (UniqueName: \"kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300831 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.300784 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.301230 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.301190 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.301286 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.301296 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.301203 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.302105 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.302138 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.302646 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.309366 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.324779 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kls7n\" (UniqueName: \"kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n\") pod \"ovnkube-node-b6xtv\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.334826 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.360721 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.374496 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.390683 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.395384 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.395437 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.395450 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.395470 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.395483 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.406701 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.425798 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerStarted","Data":"31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.426139 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerStarted","Data":"63da723e811799abb1a8087bd1671cab9f1f092249953ed0cbe67c463a923e52"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.427298 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.427350 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.427362 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"c814a7ca09d1dfc0000ad551883908a79741ba9af19a5f38ab36fa2bef7697e1"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.429890 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.430977 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.441032 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerStarted","Data":"32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.441096 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerStarted","Data":"30569e30de458a9faf8854a9da1d82618ce41eac76fa64219547e86039d11fd8"} Jan 26 10:55:25 crc kubenswrapper[4619]: W0126 10:55:25.443831 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed93d0d_0709_4425_b378_6b8a15318070.slice/crio-9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0 WatchSource:0}: Error finding container 9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0: Status 404 returned error can't find the container with id 9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0 Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.447070 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.472040 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.485823 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.497939 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.498157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.498239 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.498352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.498412 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.501003 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.517827 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.536993 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.551680 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.562798 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.575911 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.593643 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.602916 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.602958 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.602969 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.602988 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.603018 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.622001 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.644828 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.659049 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.669663 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.681794 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.695768 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab
6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.705267 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.705319 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.705330 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.705350 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.705363 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.727534 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.741371 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.752837 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.766436 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.790217 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.805052 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.809764 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.809836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.809853 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.809879 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.809896 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.819924 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:25Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.912459 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.912511 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.912522 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.912543 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:25 crc kubenswrapper[4619]: I0126 10:55:25.912557 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:25Z","lastTransitionTime":"2026-01-26T10:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.023834 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.023888 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.023898 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.023919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.023929 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.126924 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.126972 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.126982 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.127003 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.127016 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.226090 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:49:07.252213704 +0000 UTC Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.230280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.230313 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.230325 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.230341 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.230351 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.332654 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.332684 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.332693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.332710 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.332721 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.435800 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.435855 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.435867 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.435890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.435902 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.445703 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" exitCode=0 Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.445774 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.445810 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.448325 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec" exitCode=0 Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.448838 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.462010 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.486710 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.501660 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.518788 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.534138 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.539732 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.539790 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.539802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.539821 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.539834 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.553292 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.564052 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.574026 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.596948 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.610974 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.624244 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.637037 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.646308 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.646574 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.646583 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.646598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.646609 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.654295 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.667282 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.681758 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.696198 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.714740 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.729511 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.743181 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.759380 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.759423 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.759434 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.759453 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.759466 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.763688 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.783978 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.797017 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.820963 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.841847 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.855755 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.862539 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.862589 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.862612 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.862653 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.862668 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.930727 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.930882 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:26 crc kubenswrapper[4619]: E0126 10:55:26.930968 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:55:34.930944567 +0000 UTC m=+33.964985283 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:26 crc kubenswrapper[4619]: E0126 10:55:26.931057 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:26 crc kubenswrapper[4619]: E0126 10:55:26.931185 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:34.931158213 +0000 UTC m=+33.965199079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.935469 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:26Z 
is after 2025-08-24T17:21:41Z" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.965601 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.965655 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.965666 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.965696 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:26 crc kubenswrapper[4619]: I0126 10:55:26.965710 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:26Z","lastTransitionTime":"2026-01-26T10:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.032246 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.032309 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.032340 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.032512 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.032563 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.032580 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.032658 4619 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:35.032636241 +0000 UTC m=+34.066676957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033117 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033145 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033156 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033188 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:35.033179086 +0000 UTC m=+34.067219802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033256 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.033286 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:35.033277189 +0000 UTC m=+34.067317905 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.071792 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.071838 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.071847 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.071864 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.071876 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.175667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.176070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.176085 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.176103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.176117 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.229977 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:58:33.38353424 +0000 UTC Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.260257 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.260379 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.260257 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.260577 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.260419 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:27 crc kubenswrapper[4619]: E0126 10:55:27.260686 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.285829 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.285876 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.285887 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.285907 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.285918 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.391431 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.391473 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.391485 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.391506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.391518 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.459364 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerStarted","Data":"a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.463191 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.463393 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.463486 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.463567 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.463697 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.477217 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.493945 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.493992 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.494005 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.494025 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.494044 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.502862 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.517259 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.565096 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.586890 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.596486 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.596535 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.596544 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.596564 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.596574 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.604415 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.616044 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.631320 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.651356 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.669393 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.683077 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.696293 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.699300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.699364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.699375 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.699395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.699407 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.710149 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.801475 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.801789 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.801873 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.802023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.802102 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.826692 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fzj46"] Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.827122 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.830187 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.830524 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.830752 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.830971 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.842984 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-host\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.843058 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjncm\" (UniqueName: \"kubernetes.io/projected/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-kube-api-access-hjncm\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.843080 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-serviceca\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.869409 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.886900 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.905180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.905232 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.905243 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.905261 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.905274 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:27Z","lastTransitionTime":"2026-01-26T10:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.912028 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.927697 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.940477 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.943613 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-serviceca\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.943680 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-host\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.943724 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjncm\" (UniqueName: \"kubernetes.io/projected/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-kube-api-access-hjncm\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.943962 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-host\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.945084 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-serviceca\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.957074 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.967682 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjncm\" (UniqueName: \"kubernetes.io/projected/b491a22b-b179-42a8-bebd-4dfc7ae4cb71-kube-api-access-hjncm\") pod \"node-ca-fzj46\" (UID: \"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\") " pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.973868 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.987598 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:27 crc kubenswrapper[4619]: I0126 10:55:27.999066 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:27Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.008394 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.008630 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.008834 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.008920 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.008981 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.020686 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117e
e1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.037001 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.048944 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.063877 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.085692 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.112207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.112269 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.112280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.112300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.112311 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.142106 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fzj46" Jan 26 10:55:28 crc kubenswrapper[4619]: W0126 10:55:28.157216 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb491a22b_b179_42a8_bebd_4dfc7ae4cb71.slice/crio-6e8c3f25d0d5015b6f32959161efe8741d8331499032ccf6a54ba37f8640429a WatchSource:0}: Error finding container 6e8c3f25d0d5015b6f32959161efe8741d8331499032ccf6a54ba37f8640429a: Status 404 returned error can't find the container with id 6e8c3f25d0d5015b6f32959161efe8741d8331499032ccf6a54ba37f8640429a Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.220400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.220825 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.220838 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.220869 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.220882 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.231090 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:05:47.377424816 +0000 UTC Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.325505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.325599 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.325615 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.325655 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.325670 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.427924 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.427971 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.427981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.427997 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.428007 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.471714 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc" exitCode=0 Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.471825 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.480661 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.484230 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fzj46" event={"ID":"b491a22b-b179-42a8-bebd-4dfc7ae4cb71","Type":"ContainerStarted","Data":"c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.484307 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fzj46" event={"ID":"b491a22b-b179-42a8-bebd-4dfc7ae4cb71","Type":"ContainerStarted","Data":"6e8c3f25d0d5015b6f32959161efe8741d8331499032ccf6a54ba37f8640429a"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.491130 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.512505 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.526248 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.530846 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.530906 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.530922 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.530946 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.530962 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.539760 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.558759 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\
"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.572367 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.589280 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.610109 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.626493 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.638318 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.638365 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.638376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.638394 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.638406 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.641000 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.658575 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.676656 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.692587 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.704982 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.720564 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.737138 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.740998 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.741030 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.741045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.741063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.741074 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.753466 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.765426 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.775995 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.788406 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.802329 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.812233 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.824862 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.843423 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.843927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.843965 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.843978 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.843999 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.844012 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.857411 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.870036 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.883026 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.900582 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:28Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.946797 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.946836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.946845 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.946860 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:28 crc kubenswrapper[4619]: I0126 10:55:28.946871 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:28Z","lastTransitionTime":"2026-01-26T10:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.050166 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.050208 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.050218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.050237 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.050249 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.152485 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.152530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.152539 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.152555 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.152567 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.207073 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.221557 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.231816 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.231783 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-01 21:50:54.297112177 +0000 UTC Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.242435 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.255226 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.255264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.255272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.255289 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.255299 4619 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.257757 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8
945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.260076 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.260076 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.260153 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:29 crc kubenswrapper[4619]: E0126 10:55:29.260302 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:29 crc kubenswrapper[4619]: E0126 10:55:29.260355 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:29 crc kubenswrapper[4619]: E0126 10:55:29.260421 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.271342 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.279660 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.289086 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.305803 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.318853 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.333396 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.348236 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.362307 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.362360 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.362565 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.362583 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.362596 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.368934 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.381909 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.393082 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.465755 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.466337 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.466403 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.466488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.466555 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.490651 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105" exitCode=0 Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.490682 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.510919 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.526566 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.555434 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.570202 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.570255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.570268 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.570293 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.570308 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.571587 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.587764 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.600658 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.612932 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.632871 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.644970 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.659256 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.673350 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.673877 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.673953 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.673965 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.673987 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.674000 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.684278 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.699869 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.713939 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:29Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.776272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.776585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.776600 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.776630 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.776640 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.879722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.879764 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.879772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.879793 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.879803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.982596 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.982671 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.982683 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.982705 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:29 crc kubenswrapper[4619]: I0126 10:55:29.982719 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:29Z","lastTransitionTime":"2026-01-26T10:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.085718 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.085759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.085767 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.085785 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.085795 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.188310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.188346 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.188354 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.188372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.188383 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.232840 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:45:55.706200716 +0000 UTC Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.290320 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.290375 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.290385 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.290406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.290418 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.392820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.392866 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.392878 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.392895 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.392907 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.494847 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.494880 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.494890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.494927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.494937 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.498923 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.502962 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.503108 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e" exitCode=0 Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.516045 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.530192 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.546955 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.559578 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.572663 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.583793 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.595736 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.597843 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.597887 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.597897 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.597915 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.597927 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.622208 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z 
is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.638995 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.653885 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.672529 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.689466 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.701318 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.701351 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.701360 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.701375 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.701385 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.704227 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.719185 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:30Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.803630 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.803679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.803689 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.803707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.803717 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.906421 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.906462 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.906474 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.906492 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:30 crc kubenswrapper[4619]: I0126 10:55:30.906506 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:30Z","lastTransitionTime":"2026-01-26T10:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.009401 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.009436 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.009449 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.009468 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.009480 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.112364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.112415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.112425 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.112445 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.112457 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.128926 4619 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.215059 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.215107 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.215116 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.215132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.215145 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.233515 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:15:25.82039029 +0000 UTC Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.261453 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.261504 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:31 crc kubenswrapper[4619]: E0126 10:55:31.261609 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.261449 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:31 crc kubenswrapper[4619]: E0126 10:55:31.261797 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:31 crc kubenswrapper[4619]: E0126 10:55:31.261944 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.276230 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.286998 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.300632 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.312083 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.318190 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.318228 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.318237 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.318254 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.318265 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.325767 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.342020 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.362005 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.377472 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.392440 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.409549 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.421475 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.421519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.421544 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.421568 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.421580 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.425256 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.436247 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.448306 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.469606 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:31Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.525691 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.525769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.525977 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.526000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.526011 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.628674 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.628715 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.628727 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.628746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.628769 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.731707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.731802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.731814 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.731834 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.731846 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.939148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.939210 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.939225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.939243 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:31 crc kubenswrapper[4619]: I0126 10:55:31.939587 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:31Z","lastTransitionTime":"2026-01-26T10:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.042228 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.042519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.042666 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.042978 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.043064 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.149311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.149827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.149837 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.149856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.149869 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.234390 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:02:06.218618441 +0000 UTC Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.252848 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.252898 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.252911 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.252929 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.252941 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.355594 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.355686 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.355699 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.355724 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.355738 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.458494 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.458538 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.458551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.458569 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.458581 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.515568 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerStarted","Data":"e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.561233 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.561292 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.561304 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.561326 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.561345 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.637111 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.637172 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.637184 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.637207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.637221 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.658085 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.663429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.663716 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.663918 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.664083 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.664209 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.686247 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.692588 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.692684 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.692707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.692769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.692787 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.711117 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.717788 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.717877 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.717902 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.717932 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.717969 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.735377 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.741987 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.742045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.742060 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.742084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.742102 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.755913 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:32Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:32 crc kubenswrapper[4619]: E0126 10:55:32.756061 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.759242 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.759297 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.759310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.759333 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.759347 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.861271 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.861318 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.861329 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.861348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.861359 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.965024 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.965081 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.965091 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.965112 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:32 crc kubenswrapper[4619]: I0126 10:55:32.965128 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:32Z","lastTransitionTime":"2026-01-26T10:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.102268 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.102319 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.102330 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.102351 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.102366 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.204356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.204405 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.204416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.204435 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.204450 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.235048 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:39:51.638915731 +0000 UTC Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.260569 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.260604 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:33 crc kubenswrapper[4619]: E0126 10:55:33.260741 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.260645 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:33 crc kubenswrapper[4619]: E0126 10:55:33.260919 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:33 crc kubenswrapper[4619]: E0126 10:55:33.260983 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.307046 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.307081 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.307090 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.307108 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.307120 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.409303 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.409356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.409369 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.409392 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.409407 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.513338 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.513640 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.513719 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.513843 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.513933 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.522231 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7" exitCode=0 Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.522341 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.533713 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.534219 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.534305 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.534363 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.538630 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654
\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.566871 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.567738 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.570989 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.583721 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.597649 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.622979 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.624225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.624370 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.624457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.624539 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.624605 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.639781 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.654384 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.670973 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.686411 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.704033 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.720822 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.727320 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.727356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.727366 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.727385 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.727397 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.737269 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.751746 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.763470 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.777688 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.790181 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.800305 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.811639 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.829346 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.829385 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.829396 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.829414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.829428 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.834812 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc
23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.852010 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.868031 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.888492 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.901479 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.933195 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.933241 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.933252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.933272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.933303 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:33Z","lastTransitionTime":"2026-01-26T10:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:33 crc kubenswrapper[4619]: I0126 10:55:33.940772 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:33Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.030666 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.035315 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.035349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.035358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.035376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.035387 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.044810 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.057189 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.069046 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.137294 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.137346 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.137358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.137373 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.137384 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.236097 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:19:51.234035951 +0000 UTC Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.241358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.241414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.241431 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.241463 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.241478 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.343891 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.343931 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.343943 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.343960 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.343970 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.449446 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.449502 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.450117 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.450168 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.450183 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.545923 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3c5f7d0-80be-4cd1-8700-edae2eb1a04a" containerID="a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7" exitCode=0 Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.546293 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerDied","Data":"a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.553522 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.553582 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.553596 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.553646 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.553665 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.562692 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.577177 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.590297 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.607455 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.623086 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.635088 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.644677 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.655110 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.656525 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.656549 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.656557 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.656571 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.656581 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.668330 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.678636 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.691695 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.702363 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.713463 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.726125 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.759096 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.759130 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.759137 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.759152 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.759161 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.864869 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.864920 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.864929 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.864947 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.864960 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.936605 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:34 crc kubenswrapper[4619]: E0126 10:55:34.936814 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:55:50.936785453 +0000 UTC m=+49.970826169 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.936883 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:34 crc kubenswrapper[4619]: E0126 10:55:34.937019 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:34 crc kubenswrapper[4619]: E0126 10:55:34.937055 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:50.93704857 +0000 UTC m=+49.971089276 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.967435 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.967737 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.967875 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.967979 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:34 crc kubenswrapper[4619]: I0126 10:55:34.968065 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:34Z","lastTransitionTime":"2026-01-26T10:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.037948 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.038034 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.038118 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038317 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038353 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038371 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038463 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:51.038436706 +0000 UTC m=+50.072477442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038535 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038576 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038591 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038668 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:51.038646292 +0000 UTC m=+50.072687008 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.038890 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.039038 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:51.039021522 +0000 UTC m=+50.073062458 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.072109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.072177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.072305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.072377 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.072425 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.176354 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.176402 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.176412 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.176433 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.176448 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.236995 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:38:19.680577448 +0000 UTC Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.262234 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.262398 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.262475 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.262607 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.262692 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:35 crc kubenswrapper[4619]: E0126 10:55:35.262833 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.279187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.279238 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.279252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.279271 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.279282 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.382283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.382323 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.382334 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.382351 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.382364 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.484695 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.484745 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.484756 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.484777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.484789 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.553669 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" event={"ID":"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a","Type":"ContainerStarted","Data":"040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.568082 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587555 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ov
nkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587915 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587923 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587939 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.587948 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.601787 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.618049 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.630176 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.642538 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.657695 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.672880 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.688603 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.690101 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.690152 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.690163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.690178 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.690188 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
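Every status patch above fails the same way: the webhook client rejects the server certificate because the node clock (2026-01-26T10:55:35Z) is past the certificate's NotAfter (2025-08-24T17:21:41Z). A minimal Go sketch of that validity-window test, using only the standard library (the file path is hypothetical; during a real TLS handshake crypto/x509 performs the equivalent check internally):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the real server cert is mounted into the
	// network-node-identity webhook pod via its webhook-cert volume.
	data, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same window test that yields "certificate has expired or
	// is not yet valid" during verification.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}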
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.706094 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.722285 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.736045 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.752005 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.762499 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:35Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.793176 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.793214 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.793227 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.793246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.793260 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.895386 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.895486 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.895504 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.895530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.895543 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.998959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.999011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.999020 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.999039 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:35 crc kubenswrapper[4619]: I0126 10:55:35.999049 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:35Z","lastTransitionTime":"2026-01-26T10:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
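The condition payload repeated in these setters.go entries is the node's Ready condition flipping to False while the CNI config is missing. A sketch of constructing the same condition object (assumes the k8s.io/api and k8s.io/apimachinery modules are vendored; values copied from the log):

package main

import (
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.NewTime(time.Date(2026, 1, 26, 10, 55, 35, 0, time.UTC))
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	// Prints the same condition={...} JSON shape seen in the log lines.
	fmt.Println(string(out))
}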
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.101573 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.101664 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.101678 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.101760 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.101774 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.204283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.204328 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.204340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.204361 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.204380 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.237837 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:32:13.518967479 +0000 UTC
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.307185 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.307230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.307239 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.307255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.307265 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.409676 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.409744 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.409754 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.409785 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.409797 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
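The certificate_manager entries interleaved here show why rotation keeps being attempted: each computed rotation deadline (2025-11-13 above, then 2026-01-08 shortly after) is already in the past relative to the node clock, so the manager re-rolls a deadline and tries again. client-go's certificate manager jitters the deadline into a window late in the certificate's lifetime; the sketch below uses a 70-90% window as an illustrative assumption, not a quote of the exact upstream constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's
// lifetime, here between 70% and 90% of the NotBefore..NotAfter span
// (illustrative constants; upstream jitters a fraction of the
// validity period in a similar way).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	// NotAfter taken from the log line above; NotBefore is an assumed
	// 30-day lifetime for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-30 * 24 * time.Hour)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline already passed; attempt rotation now")
	}
}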
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.511656 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.511711 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.511720 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.511736 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.511747 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.614310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.614347 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.614357 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.614372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.614384 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.717373 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.717425 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.717434 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.717456 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.717465 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.821288 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.821557 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.821644 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.821723 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.821789 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.925024 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.925060 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.925070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.925085 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:36 crc kubenswrapper[4619]: I0126 10:55:36.925094 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:36Z","lastTransitionTime":"2026-01-26T10:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.028200 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.028244 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.028259 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.028277 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.028289 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.132485 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.132551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.132572 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.132600 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.132644 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.236111 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.236182 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.236201 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.236227 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.236247 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.238286 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:09:48.759126096 +0000 UTC Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.260876 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.260914 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.260925 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:55:37 crc kubenswrapper[4619]: E0126 10:55:37.261123 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:55:37 crc kubenswrapper[4619]: E0126 10:55:37.261272 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:55:37 crc kubenswrapper[4619]: E0126 10:55:37.261351 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.339261 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.339351 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.339368 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.339399 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.339421 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
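The sync errors above all trace back to one condition: nothing has written a network config into the CNI configuration directory yet, so no pod sandbox can get networking. An illustrative stdlib-only approximation of that discovery step (the real scan is done by the container runtime via libcni, not by this code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfigs lists candidate CNI network configs the way a runtime
// scans its conf dir: regular files with a recognized extension.
func findCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	confs, err := findCNIConfigs("/etc/kubernetes/cni/net.d")
	if err != nil || len(confs) == 0 {
		// This is the state the kubelet keeps reporting above.
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("found CNI configs:", confs)
}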
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.442115 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.442157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.442167 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.442183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.442195 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.497946 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q"]
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.498492 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.502151 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.502331 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.515544 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.529893 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.545450 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.545491 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.545520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.545542 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.545552 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
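Each failed update in this log is a strategic merge patch against pod status: the $setElementOrder/conditions directive pins the ordering of the conditions list, whose elements merge into the existing list by their "type" key. A minimal sketch of assembling that patch shape with the standard library (field values abbreviated from the network-check-target entry above):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Shape of the kubelet's status patch, as seen (escaped) in the log.
	// "$setElementOrder/conditions" fixes list order; entries under
	// "conditions" merge into existing elements keyed by "type".
	patch := map[string]any{
		"metadata": map[string]any{"uid": "3b6479f0-333b-4a96-9adf-2099afdc2447"},
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	body, _ := json.Marshal(patch)
	// Sent to the API server as application/strategic-merge-patch+json.
	fmt.Println(string(body))
}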
Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.546254 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.560994 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/0.log" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.564028 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.566291 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a" exitCode=1 Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.566332 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.567094 4619 scope.go:117] "RemoveContainer" containerID="a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.576935 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.592266 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.606597 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.624597 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.645840 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.648287 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.648347 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.648365 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.648387 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.648398 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.661332 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.680305 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},
{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.681270 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.681316 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.681403 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.681497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nnv\" (UniqueName: \"kubernetes.io/projected/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-kube-api-access-k2nnv\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.696164 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.708985 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.718377 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.730385 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.740580 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.750419 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.750571 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.750661 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.750735 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.750791 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.752471 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.764231 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.775722 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.782115 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.782252 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.782338 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.782424 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2nnv\" (UniqueName: \"kubernetes.io/projected/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-kube-api-access-k2nnv\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.783223 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.783242 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.787398 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.797072 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.799505 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2nnv\" (UniqueName: \"kubernetes.io/projected/7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df-kube-api-access-k2nnv\") pod \"ovnkube-control-plane-749d76644c-m6m7q\" (UID: \"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.800185 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.812590 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.814297 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: W0126 10:55:37.826531 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1ba0a5_54cd_4f55_b3c9_cdd5c75e26df.slice/crio-1289f680940e774ae6c0ffcc586f18662b58a2a94de3fe42436c370ee46433db WatchSource:0}: Error finding container 1289f680940e774ae6c0ffcc586f18662b58a2a94de3fe42436c370ee46433db: Status 404 returned error can't find the container with id 1289f680940e774ae6c0ffcc586f18662b58a2a94de3fe42436c370ee46433db Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.832768 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.847257 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.852808 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.852837 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.852845 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.852861 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.852871 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.860714 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.874404 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.893938 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\" 5797 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 10:55:36.191052 5797 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 10:55:36.191116 5797 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 10:55:36.191131 5797 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 10:55:36.191136 5797 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 10:55:36.191175 5797 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 10:55:36.191187 5797 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 10:55:36.191189 5797 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 10:55:36.191177 5797 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 10:55:36.191209 5797 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 10:55:36.191208 5797 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 10:55:36.191218 5797 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 10:55:36.191227 5797 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 10:55:36.191228 5797 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 10:55:36.191232 5797 factory.go:656] Stopping watch factory\\\\nI0126 10:55:36.191250 5797 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.907545 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.919037 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.936232 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:37Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.955299 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.955335 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:37 crc 
kubenswrapper[4619]: I0126 10:55:37.955344 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.955360 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:37 crc kubenswrapper[4619]: I0126 10:55:37.955369 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:37Z","lastTransitionTime":"2026-01-26T10:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.058065 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.058116 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.058126 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.058144 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.058154 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.160927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.160967 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.160976 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.160992 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.161042 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.238528 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:21:15.556468815 +0000 UTC Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.263583 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.263640 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.263651 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.263667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.263680 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.366170 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.366236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.366248 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.366267 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.366282 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.474391 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.474460 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.474474 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.474498 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.474510 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.571767 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" event={"ID":"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df","Type":"ContainerStarted","Data":"bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.572175 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" event={"ID":"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df","Type":"ContainerStarted","Data":"0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.572291 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" event={"ID":"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df","Type":"ContainerStarted","Data":"1289f680940e774ae6c0ffcc586f18662b58a2a94de3fe42436c370ee46433db"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.574564 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/0.log" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.575730 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.575753 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.575769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.575783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.575796 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.577812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.578268 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.587073 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.608650 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.629735 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.634795 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bs2t7"] Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.635402 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: E0126 10:55:38.635498 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.646686 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.659275 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.670658 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.678767 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.678809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.678819 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.678837 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.678848 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.686719 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.689855 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.689920 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44sfz\" (UniqueName: \"kubernetes.io/projected/6a4ef536-778e-47e5-afb2-539e96eba778-kube-api-access-44sfz\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.699958 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.747477 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.761178 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.777383 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.780983 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.781008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.781016 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.781031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.781041 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.788236 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.790442 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44sfz\" (UniqueName: \"kubernetes.io/projected/6a4ef536-778e-47e5-afb2-539e96eba778-kube-api-access-44sfz\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.790504 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: E0126 
10:55:38.790710 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:38 crc kubenswrapper[4619]: E0126 10:55:38.790826 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:39.29080015 +0000 UTC m=+38.324840866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.799533 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.814454 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44sfz\" (UniqueName: \"kubernetes.io/projected/6a4ef536-778e-47e5-afb2-539e96eba778-kube-api-access-44sfz\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.820325 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc
23d2ac09bca3a35f580cb84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\" 5797 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 10:55:36.191052 5797 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 10:55:36.191116 5797 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 10:55:36.191131 5797 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 10:55:36.191136 5797 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 10:55:36.191175 5797 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 10:55:36.191187 5797 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 10:55:36.191189 5797 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 10:55:36.191177 5797 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 10:55:36.191209 5797 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 10:55:36.191208 5797 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 10:55:36.191218 5797 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 10:55:36.191227 5797 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 10:55:36.191228 5797 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 10:55:36.191232 5797 factory.go:656] Stopping watch factory\\\\nI0126 10:55:36.191250 5797 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.836767 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.848774 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.860251 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.872427 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.883457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.883502 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.883511 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.883525 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.883536 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.885382 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.897761 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.908250 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.919607 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.938442 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\" 5797 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 10:55:36.191052 5797 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 10:55:36.191116 5797 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 10:55:36.191131 5797 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 10:55:36.191136 5797 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 10:55:36.191175 5797 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 10:55:36.191187 5797 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 10:55:36.191189 5797 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 10:55:36.191177 5797 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 10:55:36.191209 5797 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 10:55:36.191208 5797 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 10:55:36.191218 5797 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 10:55:36.191227 5797 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 10:55:36.191228 5797 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 10:55:36.191232 5797 factory.go:656] Stopping watch factory\\\\nI0126 10:55:36.191250 5797 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.950939 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.964395 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.976114 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.985950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.986008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.986018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.986040 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.986051 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:38Z","lastTransitionTime":"2026-01-26T10:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:38 crc kubenswrapper[4619]: I0126 10:55:38.990635 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.004392 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a
3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.018268 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.030388 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPa
th\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.041201 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.088927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.088974 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.088983 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.089002 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.089013 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.191716 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.191761 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.191770 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.191788 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.191798 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.239229 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:14:48.028773496 +0000 UTC Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.260668 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.260702 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.260712 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.260822 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.260938 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.261035 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.293749 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.293954 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.294068 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:40.294039597 +0000 UTC m=+39.328080393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.294856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.294885 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.294897 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.294915 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.294925 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.399052 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.399123 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.399140 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.399169 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.399191 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.501697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.501749 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.501759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.501782 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.501792 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.583775 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/1.log" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.584394 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/0.log" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.587599 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6" exitCode=1 Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.587643 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.587723 4619 scope.go:117] "RemoveContainer" containerID="a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.588524 4619 scope.go:117] "RemoveContainer" containerID="670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6" Jan 26 10:55:39 crc kubenswrapper[4619]: E0126 10:55:39.588741 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.604867 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.604913 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.604924 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.604939 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.604950 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.607268 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2
299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7817fe652b37064adffe6813924e690879103fc23d2ac09bca3a35f580cb84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"message\\\":\\\" 5797 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 10:55:36.191052 5797 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 10:55:36.191116 5797 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 10:55:36.191131 5797 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 10:55:36.191136 5797 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 10:55:36.191175 5797 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 10:55:36.191187 5797 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 10:55:36.191189 5797 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 10:55:36.191177 5797 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 10:55:36.191209 5797 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 10:55:36.191208 5797 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 10:55:36.191218 5797 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 10:55:36.191227 5797 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 10:55:36.191228 5797 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 10:55:36.191232 5797 factory.go:656] Stopping watch factory\\\\nI0126 10:55:36.191250 5797 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.620681 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.635003 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.648675 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.662329 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.675476 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.691329 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab
6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.704989 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.707514 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.707551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.707560 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.707581 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.707591 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.720557 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.733416 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.744812 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.758063 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.771649 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.781956 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.792634 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.803943 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:39Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.810064 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.810094 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.810103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.810121 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.810132 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.913171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.913235 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.913254 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.913275 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:39 crc kubenswrapper[4619]: I0126 10:55:39.913287 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:39Z","lastTransitionTime":"2026-01-26T10:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.015748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.015796 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.015809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.015830 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.015843 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.136042 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.136081 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.136092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.136111 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.136126 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.238319 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.238363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.238372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.238403 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.238414 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.239378 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:23:59.129980715 +0000 UTC
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.260775 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:55:40 crc kubenswrapper[4619]: E0126 10:55:40.260951 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.337006 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:40 crc kubenswrapper[4619]: E0126 10:55:40.337175 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:40 crc kubenswrapper[4619]: E0126 10:55:40.337244 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:42.337226688 +0000 UTC m=+41.371267404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.340484 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.340518 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.340529 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.340547 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.340559 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.442959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.443032 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.443043 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.443062 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.443077 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.546339 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.546741 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.546754 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.546774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.546784 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.592791 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/1.log" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.596291 4619 scope.go:117] "RemoveContainer" containerID="670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6" Jan 26 10:55:40 crc kubenswrapper[4619]: E0126 10:55:40.596450 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.615189 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.630418 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.648794 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.649973 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.650028 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.650043 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.650065 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.650082 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.671965 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.687758 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.698566 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.710402 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.728934 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.745253 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.753646 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.753734 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.753745 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.753766 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.753778 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.757546 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.774906 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.800917 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.813202 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.830904 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.844534 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.856641 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.856692 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.856704 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.856726 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.856738 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.865777 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:40Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.958902 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.958942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.958952 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.958979 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:40 crc kubenswrapper[4619]: I0126 10:55:40.958991 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:40Z","lastTransitionTime":"2026-01-26T10:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.062132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.062175 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.062187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.062206 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.062219 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.165146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.165214 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.165231 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.165256 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.165270 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.240127 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:54:43.175172793 +0000 UTC Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.260173 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:41 crc kubenswrapper[4619]: E0126 10:55:41.260368 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.260464 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:41 crc kubenswrapper[4619]: E0126 10:55:41.260682 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.260848 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:41 crc kubenswrapper[4619]: E0126 10:55:41.260954 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.268027 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.268102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.268116 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.268163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.268180 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.290226 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.304686 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.322353 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.337316 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.349843 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.370742 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.371044 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.371160 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.371280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.371376 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.371111 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.393893 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.410184 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.430296 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.453215 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.471425 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.474521 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.474550 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.474559 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.474575 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.474586 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.485978 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.499189 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.511073 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.520369 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.533749 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:41Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.577011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.577395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.577638 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.577824 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.577994 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.680229 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.680838 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.680955 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.681057 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.681141 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.783295 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.783340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.783349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.783364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.783375 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.885790 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.885834 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.885846 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.885871 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.885882 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.989291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.989358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.989372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.989395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:41 crc kubenswrapper[4619]: I0126 10:55:41.989410 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:41Z","lastTransitionTime":"2026-01-26T10:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.092579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.092645 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.092655 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.092674 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.092684 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.196787 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.196848 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.196912 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.196932 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.196943 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.241055 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:16:03.682648389 +0000 UTC Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.260760 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.260927 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.298959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299004 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299014 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299075 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.298959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299004 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299014 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.299075 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.355905 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.356094 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.356172 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:46.356151337 +0000 UTC m=+45.390192053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered
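The nestedpendingoperations.go entry above shows the volume manager's exponential backoff at work: the failed MountVolume is blocked for 4s (durationBeforeRetry) before the next attempt. A minimal sketch of that doubling policy; the initial and maximum delays here are illustrative assumptions, not the kubelet's exact constants:

    package main

    import (
    	"fmt"
    	"time"
    )

    // expBackoff models the per-operation delay behind "durationBeforeRetry":
    // each failure doubles the wait, capped at max.
    type expBackoff struct {
    	delay, max time.Duration
    }

    func (b *expBackoff) next() time.Duration {
    	d := b.delay
    	if b.delay < b.max {
    		b.delay *= 2
    		if b.delay > b.max {
    			b.delay = b.max
    		}
    	}
    	return d
    }

    func main() {
    	b := expBackoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
    	for i := 0; i < 5; i++ {
    		fmt.Println(b.next()) // prints 500ms, 1s, 2s, 4s, 8s; the 4s step matches the log
    	}
    }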
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.401917 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.402163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.402294 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.402397 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.402462 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.504858 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.505279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.505414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.505522 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.505659 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.608388 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.608427 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.608437 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.608454 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.608463 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
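For context on the certificate_manager.go:356 entry earlier in this log (expiration 2026-02-24, rotation deadline 2025-11-14): client-go's certificate manager schedules rotation at a jittered fraction of the certificate's lifetime, so a deadline months before expiry is expected, and a deadline already in the past means rotation is overdue. A sketch under the assumption of a 70-90% jitter window and an assumed issue date:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a point 70-90% of the way through the
    // certificate's lifetime, mirroring in spirit the jitter used by
    // client-go's certificate manager; the exact window is an assumption.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notBefore := time.Date(2025, 8, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
    	fmt.Println(rotationDeadline(notBefore, notAfter))
    	// A deadline in the past relative to the node clock (as logged:
    	// 2025-11-14 vs 2026-01-26) means rotation is already overdue.
    }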
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.710210 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.710246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.710256 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.710272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.710283 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.813297 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.813342 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.813355 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.813372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.813381 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.917559 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.917950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.918103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.918191 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.918266 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.925289 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.925437 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.925524 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.925703 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.925823 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.941752 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:42Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.945785 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.945826 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.945836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.945854 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.945867 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.961443 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:42Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.965106 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.965151 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
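The status patch recorded above is rejected because the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. The TLS failure reduces to the NotBefore/NotAfter comparison sketched below; the NotBefore date is an assumption, the other times come from the log:

    package main

    import (
    	"crypto/x509"
    	"fmt"
    	"time"
    )

    // checkValidity is the clock comparison underlying the logged
    // "certificate has expired or is not yet valid" error.
    func checkValidity(cert *x509.Certificate, now time.Time) error {
    	if now.Before(cert.NotBefore) {
    		return fmt.Errorf("certificate not yet valid (NotBefore %s)", cert.NotBefore)
    	}
    	if now.After(cert.NotAfter) {
    		return fmt.Errorf("certificate has expired (NotAfter %s)", cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	cert := &x509.Certificate{
    		NotBefore: time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC), // assumed issue time
    		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
    	}
    	now := time.Date(2026, 1, 26, 10, 55, 42, 0, time.UTC) // node clock from the log
    	fmt.Println(checkValidity(cert, now)) // expired, matching the webhook failure
    }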
event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.965167 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.965190 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.965205 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.977303 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:42Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.981357 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.981420 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
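The back-to-back "Error updating node status, will retry" entries in this log reflect the kubelet's bounded retry loop: each status sync attempts the PATCH up to nodeStatusUpdateRetry times before giving up and waiting for the next sync. A schematic Go sketch; the constant 5 matches upstream kubelet at time of writing, and patchNodeStatus is a hypothetical stand-in for the real API call:

    package main

    import (
    	"errors"
    	"fmt"
    )

    const nodeStatusUpdateRetry = 5

    // patchNodeStatus is a hypothetical stand-in for the real PATCH to the
    // API server; here it always fails, like the webhook-rejected patches above.
    func patchNodeStatus() error {
    	return errors.New("failed calling webhook: certificate has expired")
    }

    func updateNodeStatus() error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := patchNodeStatus(); err != nil {
    			fmt.Println("Error updating node status, will retry:", err)
    			continue
    		}
    		return nil
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	fmt.Println(updateNodeStatus())
    }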
event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.981433 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.981456 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.981470 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:42 crc kubenswrapper[4619]: E0126 10:55:42.995411 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:42Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.999249 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.999373 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.999433 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.999505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:42 crc kubenswrapper[4619]: I0126 10:55:42.999575 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:42Z","lastTransitionTime":"2026-01-26T10:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: E0126 10:55:43.013326 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:43Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:43 crc kubenswrapper[4619]: E0126 10:55:43.013455 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.021079 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.021127 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.021139 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.021196 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.021210 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.123951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.123987 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.123995 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.124013 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.124022 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.226980 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.227035 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.227053 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.227079 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.227097 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.241456 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:05:40.070494729 +0000 UTC Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.260977 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.261131 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.260977 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:43 crc kubenswrapper[4619]: E0126 10:55:43.261301 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:43 crc kubenswrapper[4619]: E0126 10:55:43.262121 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:43 crc kubenswrapper[4619]: E0126 10:55:43.262388 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.331274 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.331328 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.331340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.331363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.331379 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.434752 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.434807 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.434820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.434844 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.434856 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.537352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.537393 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.537401 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.537419 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.537429 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.639981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.640032 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.640044 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.640069 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.640084 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.743188 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.743249 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.743266 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.743290 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.743303 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.845778 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.845829 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.845841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.845862 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.845875 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.948743 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.948793 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.948806 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.948825 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:43 crc kubenswrapper[4619]: I0126 10:55:43.948841 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:43Z","lastTransitionTime":"2026-01-26T10:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.052597 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.052666 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.052683 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.052707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.052721 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.156282 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.156347 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.156359 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.156385 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.156401 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.242673 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:52:41.05838357 +0000 UTC Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.259216 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.259280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.259301 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.259326 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.259342 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.260192 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:44 crc kubenswrapper[4619]: E0126 10:55:44.260446 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.362951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.363008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.363019 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.363037 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.363047 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.465871 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.465931 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.465951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.465977 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.465995 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.574219 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.574346 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.574377 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.574408 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.574429 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.678501 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.678569 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.678588 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.678648 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.678667 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.782503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.782564 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.782580 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.782608 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.782654 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.887148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.887277 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.887304 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.887338 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.887360 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.992413 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.993371 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.993516 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.993785 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:44 crc kubenswrapper[4619]: I0126 10:55:44.994057 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:44Z","lastTransitionTime":"2026-01-26T10:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.097957 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.098039 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.098061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.098099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.098125 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.202222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.202301 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.202322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.202349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.202370 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.243729 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:32:09.798260057 +0000 UTC Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.260530 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:45 crc kubenswrapper[4619]: E0126 10:55:45.260786 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.261199 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.261224 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:45 crc kubenswrapper[4619]: E0126 10:55:45.261541 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:45 crc kubenswrapper[4619]: E0126 10:55:45.262793 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.306047 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.306132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.306153 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.306184 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.306206 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.410289 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.410355 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.410372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.410397 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.410417 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.514420 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.514491 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.514509 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.514536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.514557 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.617750 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.617830 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.617852 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.617882 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.617902 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.720977 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.721025 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.721036 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.721059 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.721072 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.825002 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.825084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.825107 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.825140 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.825164 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.928332 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.928403 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.928426 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.928457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:45 crc kubenswrapper[4619]: I0126 10:55:45.928480 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:45Z","lastTransitionTime":"2026-01-26T10:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.032429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.032500 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.032518 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.032551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.032570 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:46Z","lastTransitionTime":"2026-01-26T10:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.135458 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.135828 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.135980 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.136109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.136262 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:46Z","lastTransitionTime":"2026-01-26T10:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.239282 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.239331 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.239341 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.239359 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.239371 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:46Z","lastTransitionTime":"2026-01-26T10:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.244573 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:25:33.613384472 +0000 UTC
Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.260984 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:55:46 crc kubenswrapper[4619]: E0126 10:55:46.261172 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778"
Jan 26 10:55:46 crc kubenswrapper[4619]: I0126 10:55:46.401163 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:55:46 crc kubenswrapper[4619]: E0126 10:55:46.401346 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 10:55:46 crc kubenswrapper[4619]: E0126 10:55:46.401421 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:55:54.401401148 +0000 UTC m=+53.435441864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.071876 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.071929 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.071941 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.071961 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.071998 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:47Z","lastTransitionTime":"2026-01-26T10:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.245567 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:19:04.640456904 +0000 UTC
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.261331 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.261360 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:55:47 crc kubenswrapper[4619]: E0126 10:55:47.261536 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:55:47 crc kubenswrapper[4619]: E0126 10:55:47.261696 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:55:47 crc kubenswrapper[4619]: I0126 10:55:47.261827 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:55:47 crc kubenswrapper[4619]: E0126 10:55:47.261905 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.101274 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.101323 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.101331 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.101348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.101359 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:48Z","lastTransitionTime":"2026-01-26T10:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.246112 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:21:50.881806969 +0000 UTC
Jan 26 10:55:48 crc kubenswrapper[4619]: I0126 10:55:48.260677 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:55:48 crc kubenswrapper[4619]: E0126 10:55:48.260967 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.034440 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.034479 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.034490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.034512 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.034525 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:49Z","lastTransitionTime":"2026-01-26T10:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.246521 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:55:44.150827897 +0000 UTC
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.261104 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.261190 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:55:49 crc kubenswrapper[4619]: I0126 10:55:49.261123 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:55:49 crc kubenswrapper[4619]: E0126 10:55:49.261317 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:55:49 crc kubenswrapper[4619]: E0126 10:55:49.261557 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:55:49 crc kubenswrapper[4619]: E0126 10:55:49.261769 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.071766 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.071811 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.071820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.071836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.071846 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.103885 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.114273 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.120909 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.135147 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.148991 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.173432 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.175024 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.175051 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.175060 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.175099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.175110 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.190371 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.202870 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.214658 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.226802 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.239148 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.246784 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:17:46.486459346 +0000 UTC Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.248733 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.260515 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:50 crc kubenswrapper[4619]: E0126 10:55:50.260682 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.261568 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.272050 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.278010 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.278063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.278078 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.278102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.278117 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.298526 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2
299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.313887 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.332756 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.351070 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:50Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.381261 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.381300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.381311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.381332 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.381343 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.484257 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.484311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.484322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.484342 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.484355 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.587155 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.587256 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.587279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.587308 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.587326 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.690853 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.690926 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.690955 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.690974 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.690986 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.794055 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.794095 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.794103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.794123 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.794132 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.896827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.896871 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.896883 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.896900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.896913 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:50Z","lastTransitionTime":"2026-01-26T10:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.953428 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:55:50 crc kubenswrapper[4619]: E0126 10:55:50.953593 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:56:22.953571775 +0000 UTC m=+81.987612491 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:55:50 crc kubenswrapper[4619]: I0126 10:55:50.953712 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:50 crc kubenswrapper[4619]: E0126 10:55:50.953827 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:50 crc kubenswrapper[4619]: E0126 10:55:50.953866 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:22.953860073 +0000 UTC m=+81.987900789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.005854 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.005942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.005960 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.005988 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.006009 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.055545 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.055651 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.055694 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.055788 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.055857 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:23.055839285 +0000 UTC m=+82.089879991 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.055972 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056067 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056075 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056147 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056174 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056097 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056282 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:23.056240007 +0000 UTC m=+82.090280923 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.056368 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:23.056337869 +0000 UTC m=+82.090378625 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.108857 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.108906 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.108918 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.108939 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.108950 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.211537 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.211585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.211594 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.211626 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.211639 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.247182 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:08:31.772069656 +0000 UTC Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.260984 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.261044 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.261132 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.261206 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.261304 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:51 crc kubenswrapper[4619]: E0126 10:55:51.261462 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.277771 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.291466 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.309435 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.315688 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.315750 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.315762 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.315784 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.315808 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.325882 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.343309 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.375555 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"nam
e\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.393787 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.418705 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.419492 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.419700 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.419851 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.419998 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.420138 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.435565 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.457186 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.471496 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.491579 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.512221 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.523033 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.523110 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.523143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.523177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.523201 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.530276 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.545690 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.559674 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.574957 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:51Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.626190 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.626252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.626272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.626300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.626321 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.729215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.729252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.729265 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.729283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.729295 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.832222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.832271 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.832280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.832298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.832309 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.934712 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.934763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.934774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.934791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:51 crc kubenswrapper[4619]: I0126 10:55:51.934803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:51Z","lastTransitionTime":"2026-01-26T10:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.037371 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.037407 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.037416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.037431 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.037441 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.140374 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.140445 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.140462 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.140488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.140505 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.244774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.244820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.244830 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.244890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.244905 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.248415 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:47:37.131283937 +0000 UTC Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.260758 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:52 crc kubenswrapper[4619]: E0126 10:55:52.261124 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.347663 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.347705 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.347715 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.347734 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.347745 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.450168 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.450211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.450220 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.450240 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.450252 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.553192 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.553222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.553231 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.553248 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.553258 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.655225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.655273 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.655286 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.655305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.655319 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.757981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.758021 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.758032 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.758050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.758060 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.860649 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.860695 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.860710 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.860729 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.860743 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.963238 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.963283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.963291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.963312 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:52 crc kubenswrapper[4619]: I0126 10:55:52.963322 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:52Z","lastTransitionTime":"2026-01-26T10:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.037085 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.037132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.037143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.037161 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.037174 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.053138 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.057544 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.057570 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.057579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.057592 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.057605 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.074946 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.079461 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.079530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.079543 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.079586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.079602 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.097352 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.101831 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.101867 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.101880 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.101899 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.101914 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.115956 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.119904 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.119972 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
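Every "Error updating node status, will retry" entry above fails identically: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26T10:55:53Z, so Go's TLS verification rejects the handshake before the status patch is ever evaluated. As a hedged illustration (this is not kubelet or webhook source; the PEM path and program are assumptions for this excerpt), the validity-window comparison behind "x509: certificate has expired or is not yet valid" looks like this:

// certcheck.go: a minimal, hypothetical sketch of the x509 validity-window
// test that produces the error seen in this log. Not kubelet code; the PEM
// path below is an assumed location for the webhook's serving certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Assumed path; in this log the certificate belongs to the
	// network-node-identity webhook listening on 127.0.0.1:9743.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// crypto/x509 rejects the chain when now falls outside
	// [NotBefore, NotAfter]; here 2026-01-26T10:55:53Z is after the
	// certificate's NotAfter of 2025-08-24T17:21:41Z.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid at %s: valid %s to %s\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

The comparison depends only on the certificate and the clock, not on the patch payload, which is why every retry within the same second fails the same way.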
event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.119982 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.119996 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.120007 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.134559 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.134683 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.136236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.136254 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.136263 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.136275 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.136285 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.238962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.239012 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.239023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.239043 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.239054 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.249264 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:38:43.517468992 +0000 UTC Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.260526 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.260553 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.260928 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.260957 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.261133 4619 scope.go:117] "RemoveContainer" containerID="670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6" Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.261089 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:53 crc kubenswrapper[4619]: E0126 10:55:53.261256 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.341178 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.341568 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.341646 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.341722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.341788 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.446040 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.446128 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.446143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.446161 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.446174 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.548794 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.548838 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.548849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.548889 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.548901 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.649516 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/1.log" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.651236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.651259 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.651267 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.651283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.651293 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.653850 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.654796 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.669799 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.690904 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 
2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.709112 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.725214 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.754144 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.754181 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.754191 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.754207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.754215 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.755792 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b644
90146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.767803 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.780774 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.793585 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.806715 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.821043 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.833141 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.844754 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.855967 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.856379 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.856407 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.856415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.856429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.856437 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.869326 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.879887 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.890685 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.903961 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:53Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.958395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.958427 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.958436 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.958451 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:53 crc kubenswrapper[4619]: I0126 10:55:53.958461 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:53Z","lastTransitionTime":"2026-01-26T10:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.060506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.060564 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.060578 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.060602 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.060634 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.164506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.164550 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.164561 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.164579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.164588 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.249764 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:48:17.267881097 +0000 UTC Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.260318 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:54 crc kubenswrapper[4619]: E0126 10:55:54.260494 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.267063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.267099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.267109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.267126 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.267139 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.370237 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.370281 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.370289 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.370306 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.370317 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.472877 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.472928 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.472942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.472962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.472976 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.488507 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:54 crc kubenswrapper[4619]: E0126 10:55:54.488710 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:54 crc kubenswrapper[4619]: E0126 10:55:54.488800 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:10.488772873 +0000 UTC m=+69.522813599 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.576306 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.576363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.576376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.576399 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.576414 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.658925 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/2.log" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.659524 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/1.log" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.661660 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" exitCode=1 Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.661710 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.661754 4619 scope.go:117] "RemoveContainer" containerID="670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.669214 4619 scope.go:117] "RemoveContainer" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" Jan 26 10:55:54 crc kubenswrapper[4619]: E0126 10:55:54.669497 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.680314 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 
10:55:54.680473 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.680483 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.680502 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.680512 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.684529 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-
cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.696942 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.708947 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.717761 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.726763 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.735259 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.742755 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.753266 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.766212 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.782645 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.782688 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.782699 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.782717 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.782729 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.786694 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b644
90146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670ebc2a7a94f116cf3555215d32995f8ce347a2299a6436545a969eac3ca6c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"Retry object setup: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711905 5994 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 10:55:38.711909 5994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-684hz\\\\nI0126 10:55:38.711919 5994 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-684hz in node crc\\\\nF0126 10:55:38.711919 5994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 10:55:38.711926 5994 obj_retry.go:386] Retry successful for *v1.Pod openshift-mul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template 
configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"i
nitContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.797000 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.810295 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.820182 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.833440 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 
2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.846807 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.885722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.885763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.885772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.885792 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.885803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.896293 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.926402 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:54Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.988435 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.988475 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.988487 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.988505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:54 crc kubenswrapper[4619]: I0126 10:55:54.988517 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:54Z","lastTransitionTime":"2026-01-26T10:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.091654 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.091698 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.091707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.091724 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.091736 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.195809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.196149 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.196241 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.196341 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.196410 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.250185 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:03:55.271687242 +0000 UTC Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.260705 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.260865 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.260888 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:55 crc kubenswrapper[4619]: E0126 10:55:55.261083 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:55 crc kubenswrapper[4619]: E0126 10:55:55.261317 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:55 crc kubenswrapper[4619]: E0126 10:55:55.261437 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.300000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.300056 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.300072 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.300102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.300117 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.402106 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.402343 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.402443 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.402515 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.402583 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.505452 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.505520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.505554 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.505594 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.505656 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.608108 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.608576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.608781 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.608922 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.609043 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.670087 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/2.log" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.677934 4619 scope.go:117] "RemoveContainer" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" Jan 26 10:55:55 crc kubenswrapper[4619]: E0126 10:55:55.678234 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.694132 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.709143 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.712525 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.712582 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.712601 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.712657 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.712678 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.731578 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.748060 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.764122 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.780531 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.798889 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 
10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.817663 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.817736 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.817754 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.817785 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.817804 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.838737 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b644
90146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.858989 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.883545 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.901598 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.921889 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.921965 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.921990 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.922023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.922048 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:55Z","lastTransitionTime":"2026-01-26T10:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.922267 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.936727 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.951081 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.966433 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab
6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:55 crc kubenswrapper[4619]: I0126 10:55:55.984904 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:55Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.008441 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:55:56Z is after 2025-08-24T17:21:41Z" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.024571 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.024608 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.024647 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.024669 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.024684 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.128191 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.128297 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.128307 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.128352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.128369 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.231825 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.231893 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.231914 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.231962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.231986 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.250588 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:49:56.524254153 +0000 UTC Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.261036 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:56 crc kubenswrapper[4619]: E0126 10:55:56.261302 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.334851 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.334928 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.334953 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.334986 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.335010 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.438230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.438298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.438318 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.438348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.438366 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.541558 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.541690 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.541711 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.541737 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.541755 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.644667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.645011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.645080 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.645183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.645300 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.748859 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.749169 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.749265 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.749411 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.749511 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.852005 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.852291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.852367 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.852457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.852534 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.955687 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.955746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.955760 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.955779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:56 crc kubenswrapper[4619]: I0126 10:55:56.955794 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:56Z","lastTransitionTime":"2026-01-26T10:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.059139 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.059201 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.059215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.059236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.059253 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.162536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.163515 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.163964 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.164161 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.164285 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.251587 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:36:46.16305155 +0000 UTC Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.260968 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:57 crc kubenswrapper[4619]: E0126 10:55:57.261131 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.261356 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:57 crc kubenswrapper[4619]: E0126 10:55:57.261420 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.261529 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:57 crc kubenswrapper[4619]: E0126 10:55:57.261572 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.267211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.267245 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.267255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.267271 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.267283 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.370581 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.370670 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.370689 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.370717 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.370737 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.474869 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.474924 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.474940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.474966 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.474983 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.578185 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.578467 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.578598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.578729 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.578851 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.681673 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.681706 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.681715 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.681730 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.681739 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.784521 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.784604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.784728 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.784756 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.784775 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.887010 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.887050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.887061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.887082 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.887098 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.989925 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.990203 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.990322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.990463 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:57 crc kubenswrapper[4619]: I0126 10:55:57.990530 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:57Z","lastTransitionTime":"2026-01-26T10:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.093595 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.093687 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.093697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.093714 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.093724 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.196549 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.196658 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.196669 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.196685 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.196697 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.252205 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:00:25.087897558 +0000 UTC Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.260502 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:55:58 crc kubenswrapper[4619]: E0126 10:55:58.260690 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.299210 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.299273 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.299281 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.299297 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.299308 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.402447 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.402495 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.402504 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.402522 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.402533 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.505743 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.505779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.505788 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.505802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.505813 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.609169 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.609400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.609471 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.609565 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.609658 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.711755 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.711793 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.711802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.711817 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.711827 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.814448 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.814517 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.814538 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.814568 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.814590 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.918392 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.918453 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.918467 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.918490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:58 crc kubenswrapper[4619]: I0126 10:55:58.918503 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:58Z","lastTransitionTime":"2026-01-26T10:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.021884 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.022041 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.022083 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.022133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.022164 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.126197 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.126244 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.126258 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.126279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.126291 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.229200 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.229234 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.229245 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.229263 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.229274 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.253379 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:48:44.30078508 +0000 UTC Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.260830 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.260829 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:55:59 crc kubenswrapper[4619]: E0126 10:55:59.261003 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:55:59 crc kubenswrapper[4619]: E0126 10:55:59.261039 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.260855 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:55:59 crc kubenswrapper[4619]: E0126 10:55:59.261169 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.332268 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.332317 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.332327 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.332348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.332360 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.435514 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.435582 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.435600 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.435669 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.435708 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.539471 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.539519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.539533 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.539553 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.539569 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.643835 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.643956 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.643982 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.644016 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.644038 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.747972 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.748469 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.749148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.749501 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.749830 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.853408 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.853746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.853874 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.853967 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.854062 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.956800 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.957138 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.957206 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.957276 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:55:59 crc kubenswrapper[4619]: I0126 10:55:59.957337 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:55:59Z","lastTransitionTime":"2026-01-26T10:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.059398 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.059450 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.059467 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.059490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.059507 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.162707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.162776 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.162797 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.162826 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.162848 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.254775 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:08:08.270029722 +0000 UTC Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.261087 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:00 crc kubenswrapper[4619]: E0126 10:56:00.261461 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.267247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.267434 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.267562 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.267768 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.267919 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.370604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.370692 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.370714 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.370738 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.370756 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.473171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.473231 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.473249 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.473284 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.473304 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.576748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.576812 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.576829 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.576856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.576872 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.681275 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.681383 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.681410 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.681442 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.681464 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.784640 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.785036 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.785113 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.785199 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.785264 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.889074 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.889147 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.889216 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.889246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.889264 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.993114 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.993186 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.993204 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.993234 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:00 crc kubenswrapper[4619]: I0126 10:56:00.993255 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:00Z","lastTransitionTime":"2026-01-26T10:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.097912 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.098028 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.098050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.098078 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.098097 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.202120 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.202193 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.202218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.202249 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.202272 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.255162 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:53:05.164417292 +0000 UTC Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.260977 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.261168 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.261432 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:01 crc kubenswrapper[4619]: E0126 10:56:01.261453 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:01 crc kubenswrapper[4619]: E0126 10:56:01.262047 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:01 crc kubenswrapper[4619]: E0126 10:56:01.262283 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.285538 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.309117 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.309263 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 
crc kubenswrapper[4619]: I0126 10:56:01.311127 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.311149 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.311174 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.311192 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.330149 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.348024 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.370454 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.394490 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.407766 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.413483 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.413518 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.413527 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.413543 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.413553 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.428035 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.439941 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.453653 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 
2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.464333 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.476676 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.490888 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.511169 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.517850 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.517899 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.517914 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.517936 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.517952 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.527117 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.540584 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.557736 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:01Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.620777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.620827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.620837 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.620854 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.620866 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.724841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.724899 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.724915 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.724940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.724957 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.828538 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.828683 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.828717 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.828764 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.828792 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.932443 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.932500 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.932511 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.932531 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:01 crc kubenswrapper[4619]: I0126 10:56:01.932541 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:01Z","lastTransitionTime":"2026-01-26T10:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.036287 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.036355 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.036372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.036397 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.036411 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.139991 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.140432 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.140706 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.140896 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.141071 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.244663 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.245148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.245360 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.245552 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.245748 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.256322 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:17:51.265774128 +0000 UTC Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.260791 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:02 crc kubenswrapper[4619]: E0126 10:56:02.260964 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.348975 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.349052 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.349074 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.349102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.349124 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.452584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.452662 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.452680 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.452700 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.452713 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.555299 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.555350 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.555365 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.555383 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.555412 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.657992 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.658359 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.658458 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.658594 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.658714 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.761770 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.761848 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.761864 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.761890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.761915 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.866171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.866225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.866243 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.866269 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.866297 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.969349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.969750 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.969940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.970067 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:02 crc kubenswrapper[4619]: I0126 10:56:02.970155 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:02Z","lastTransitionTime":"2026-01-26T10:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.073659 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.073763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.073786 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.073819 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.073839 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.176875 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.176968 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.176994 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.177028 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.177048 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.257131 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:07:48.299966776 +0000 UTC Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.260640 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.260639 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.260959 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.260693 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.261061 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.261169 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.279208 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.279589 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.279704 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.279799 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.279907 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.289724 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.289948 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.290101 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.290164 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.290219 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.313097 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.317712 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.317771 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.317783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.317801 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.317836 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.330195 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.334069 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.334102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.334113 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.334130 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.334144 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.346230 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.350225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.350266 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.350309 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.350328 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.350341 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.363241 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.366772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.366849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.366866 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.366894 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.366909 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.380731 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:03Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:03 crc kubenswrapper[4619]: E0126 10:56:03.380915 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.383280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.383324 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.383340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.383440 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.383457 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.487046 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.487112 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.487128 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.487149 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.487165 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.590405 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.590472 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.590489 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.590517 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.590534 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.693604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.693711 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.693735 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.693765 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.693788 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.796926 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.796972 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.796984 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.797000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.797014 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.900816 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.900908 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.900933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.900966 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:03 crc kubenswrapper[4619]: I0126 10:56:03.900990 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:03Z","lastTransitionTime":"2026-01-26T10:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.004004 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.004063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.004078 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.004107 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.004122 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.107730 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.107804 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.107831 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.107860 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.107881 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.211247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.211288 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.211301 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.211319 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.211331 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.258940 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:47:07.170582099 +0000 UTC Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.261001 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:04 crc kubenswrapper[4619]: E0126 10:56:04.261193 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.314768 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.314839 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.314857 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.314886 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.314903 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.418261 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.418298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.418310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.418330 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.418342 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.521311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.521364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.521376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.521397 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.521409 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.624950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.625009 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.625031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.625067 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.625088 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.727298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.727550 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.727566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.727585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.727597 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.830374 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.830886 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.830960 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.831026 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.831084 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.934578 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.934866 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.935079 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.935264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:04 crc kubenswrapper[4619]: I0126 10:56:04.935449 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:04Z","lastTransitionTime":"2026-01-26T10:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.038553 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.038900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.038981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.039053 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.039212 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.143033 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.143097 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.143112 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.143132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.143148 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.246155 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.246449 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.246519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.247166 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.247252 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.260430 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:22:05.462744235 +0000 UTC Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.260660 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.260702 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:05 crc kubenswrapper[4619]: E0126 10:56:05.261270 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.260764 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:05 crc kubenswrapper[4619]: E0126 10:56:05.261395 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:05 crc kubenswrapper[4619]: E0126 10:56:05.261550 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.350046 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.350491 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.350642 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.350773 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.350891 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.453973 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.454018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.454030 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.454070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.454082 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.557715 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.557763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.557774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.557791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.557803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.660389 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.660453 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.660466 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.660502 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.660515 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.763683 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.763743 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.763758 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.763784 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.763802 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.867018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.867060 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.867072 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.867088 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.867098 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.970661 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.971180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.971408 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.971649 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:05 crc kubenswrapper[4619]: I0126 10:56:05.971862 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:05Z","lastTransitionTime":"2026-01-26T10:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.075494 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.075570 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.075584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.075633 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.075658 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.178183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.178225 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.178236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.178251 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.178262 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.260234 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:06 crc kubenswrapper[4619]: E0126 10:56:06.260423 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.261154 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:16:33.030885698 +0000 UTC Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.280603 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.280894 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.280976 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.281048 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.281118 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.383779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.383832 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.383842 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.383861 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.383873 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.486135 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.486183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.486198 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.486218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.486229 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.589488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.589988 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.590114 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.590257 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.590409 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.693760 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.694211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.694295 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.694391 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.694479 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.797415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.797846 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.797951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.798057 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.798130 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.900178 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.900213 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.900221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.900235 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:06 crc kubenswrapper[4619]: I0126 10:56:06.900245 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:06Z","lastTransitionTime":"2026-01-26T10:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.003155 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.003232 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.003245 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.003265 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.003289 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.106206 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.106244 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.106252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.106267 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.106278 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.208327 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.208375 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.208388 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.208409 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.208423 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.260749 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.260836 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.260842 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:07 crc kubenswrapper[4619]: E0126 10:56:07.260915 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:07 crc kubenswrapper[4619]: E0126 10:56:07.261106 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.261308 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:34:27.703724919 +0000 UTC Jan 26 10:56:07 crc kubenswrapper[4619]: E0126 10:56:07.261380 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.261710 4619 scope.go:117] "RemoveContainer" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" Jan 26 10:56:07 crc kubenswrapper[4619]: E0126 10:56:07.262008 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.311048 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.311109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.311125 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.311148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.311161 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.414528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.414580 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.414595 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.414648 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.414662 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.518023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.518084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.518101 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.518126 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.518141 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.621679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.621757 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.621778 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.621808 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.621829 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.725224 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.725337 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.725388 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.725431 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.725680 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.828937 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.828991 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.829003 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.829021 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.829032 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.931936 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.931998 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.932023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.932039 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:07 crc kubenswrapper[4619]: I0126 10:56:07.932057 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:07Z","lastTransitionTime":"2026-01-26T10:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.034584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.034648 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.034661 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.034679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.034690 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.137439 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.137478 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.137492 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.137509 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.137522 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.240893 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.240941 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.240950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.240968 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.240979 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.260309 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:08 crc kubenswrapper[4619]: E0126 10:56:08.260526 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.261566 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:54:49.754150348 +0000 UTC Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.344952 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.345000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.345012 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.345034 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.345047 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.448252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.448291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.448301 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.448315 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.448327 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.551698 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.551782 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.551799 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.551827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.551843 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.654353 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.654395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.654406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.654422 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.654434 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.756802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.756868 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.756881 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.756897 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.756909 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.860187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.860237 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.860247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.860266 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.860282 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.963872 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.963922 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.963936 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.963954 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:08 crc kubenswrapper[4619]: I0126 10:56:08.963966 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:08Z","lastTransitionTime":"2026-01-26T10:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.067679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.067738 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.067751 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.067779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.067796 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.170187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.170470 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.170742 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.170839 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.170933 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.261895 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:09 crc kubenswrapper[4619]: E0126 10:56:09.263656 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.262932 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:09 crc kubenswrapper[4619]: E0126 10:56:09.264206 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.262966 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:09 crc kubenswrapper[4619]: E0126 10:56:09.264645 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.262821 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:02:07.888164852 +0000 UTC Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.273506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.273769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.273945 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.274103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.274249 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.377567 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.377634 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.377647 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.377667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.377680 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.481378 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.481448 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.481462 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.481487 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.481502 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.584328 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.584379 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.584390 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.584419 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.584453 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.687683 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.688206 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.688377 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.688530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.688697 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.792110 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.792181 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.792195 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.792215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.792228 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.895105 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.895170 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.895187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.895217 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.895237 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.997860 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.997916 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.997925 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.997944 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:09 crc kubenswrapper[4619]: I0126 10:56:09.997956 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:09Z","lastTransitionTime":"2026-01-26T10:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.100524 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.100566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.100575 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.100593 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.100604 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.203286 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.203586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.203701 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.203816 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.203921 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.260449 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:10 crc kubenswrapper[4619]: E0126 10:56:10.260668 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.265530 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:19:02.530968538 +0000 UTC Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.325772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.326063 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.326182 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.326291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.326388 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.429018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.429066 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.429076 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.429093 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.429104 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.524884 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:10 crc kubenswrapper[4619]: E0126 10:56:10.525098 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:56:10 crc kubenswrapper[4619]: E0126 10:56:10.525436 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:56:42.525417086 +0000 UTC m=+101.559457802 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.531256 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.531321 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.531335 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.531357 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.531370 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.634036 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.634079 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.634090 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.634108 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.634121 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.736399 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.736777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.736843 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.736919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.736985 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.840900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.840958 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.840973 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.840992 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.841006 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.943949 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.944293 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.944374 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.944477 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:10 crc kubenswrapper[4619]: I0126 10:56:10.944579 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:10Z","lastTransitionTime":"2026-01-26T10:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.047813 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.048246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.048368 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.048502 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.048669 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.152133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.152205 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.152222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.152247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.152265 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.255472 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.255533 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.255545 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.255563 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.255580 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.260718 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.260779 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:11 crc kubenswrapper[4619]: E0126 10:56:11.260857 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.260891 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:11 crc kubenswrapper[4619]: E0126 10:56:11.261012 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:11 crc kubenswrapper[4619]: E0126 10:56:11.261073 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.266157 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:26:20.061110679 +0000 UTC Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.274068 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.288364 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.301354 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.316478 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.328883 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.341431 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.354305 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.358068 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.358129 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.358143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.358163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.358176 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.376275 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b644
90146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.389911 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.406513 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.418306 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.432770 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 
2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.443379 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.456922 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.463883 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.463933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.463944 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.463962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.463978 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.472089 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.482459 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.495329 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:11Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.567211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.567264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc 
kubenswrapper[4619]: I0126 10:56:11.567279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.567303 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.567320 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.670329 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.670378 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.670387 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.670405 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.670416 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.773001 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.773045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.773053 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.773070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.773084 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.875885 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.875920 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.875928 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.875942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.875953 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.978602 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.978690 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.978703 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.978722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:11 crc kubenswrapper[4619]: I0126 10:56:11.978735 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:11Z","lastTransitionTime":"2026-01-26T10:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.082016 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.082079 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.082092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.082112 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.082125 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.185769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.186022 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.186043 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.186069 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.186087 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.260748 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:12 crc kubenswrapper[4619]: E0126 10:56:12.261015 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.267010 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:02:26.691898109 +0000 UTC Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.275682 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.289073 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.289390 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.289550 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.289789 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.289955 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.393791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.393849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.393860 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.393883 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.393898 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.498850 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.498918 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.498940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.498974 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.499000 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.602341 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.602394 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.602408 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.602429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.602442 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.705532 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.705585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.705598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.705643 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.705694 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.808362 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.808667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.808774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.808856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.808927 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.911200 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.911252 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.911262 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.911279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:12 crc kubenswrapper[4619]: I0126 10:56:12.911291 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:12Z","lastTransitionTime":"2026-01-26T10:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.014700 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.014768 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.014779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.014800 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.014809 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.117729 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.117763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.117771 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.117787 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.117797 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.220383 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.220416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.220424 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.220597 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.220630 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.260840 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.261021 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.261128 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.261063 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.261270 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.261405 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.267719 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:05:18.308113986 +0000 UTC
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.322770 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.322818 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.322828 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.322843 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.322856 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.425510 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.425566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.425576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.425598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.425627 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.528058 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.528100 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.528112 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.528131 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.528142 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.631692 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.631752 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.631762 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.631783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.631795 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.721177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.721232 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.721243 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.721264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.721277 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.735134 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.740180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.740215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.740226 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.740245 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.740257 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.748635 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/0.log" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.748696 4619 generic.go:334] "Generic (PLEG): container finished" podID="8aab93f8-6555-4389-b15c-9af458caa339" containerID="31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05" exitCode=1 Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.748730 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerDied","Data":"31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.749117 4619 scope.go:117] "RemoveContainer" containerID="31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05" Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.773364 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.784901 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.786513 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.786568 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.786586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.786633 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.786658 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.800067 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z"
Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.802340 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.809523 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.809555 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.809564 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.809579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.809589 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.823334 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.826037 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.837102 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.837147 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.837158 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.837175 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.837186 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.845640 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.856347 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: E0126 10:56:13.856459 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.858156 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.858199 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.858214 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.858230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.858268 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.862792 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.877136 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.892475 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.907930 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab
6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.925090 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.939345 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.953993 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.961215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.961581 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.961737 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.961868 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.961986 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:13Z","lastTransitionTime":"2026-01-26T10:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.967249 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.980110 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:13 crc kubenswrapper[4619]: I0126 10:56:13.999664 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:13Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.013476 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.027313 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.040561 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.053765 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.065068 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.065113 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.065127 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.065148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.065160 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.207530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.207815 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.207995 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.208170 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.208300 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.260564 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:14 crc kubenswrapper[4619]: E0126 10:56:14.260785 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.268461 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:04:27.1512371 +0000 UTC Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.311376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.312058 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.312211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.312355 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.312492 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.416129 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.416191 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.416200 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.416222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.416234 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.519512 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.519573 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.519585 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.519607 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.519635 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.621940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.622005 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.622019 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.622035 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.622046 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.725026 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.725070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.725078 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.725095 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.725105 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.753679 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/0.log" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.753737 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerStarted","Data":"bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.769887 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.781830 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.804458 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPat
h\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.821075 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.827929 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.827969 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.827986 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.828006 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.828018 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.847837 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.864759 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.884446 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.899506 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.913736 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.931455 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.931522 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.931546 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.931577 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.931599 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:14Z","lastTransitionTime":"2026-01-26T10:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.932220 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.947131 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.963475 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55
:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:14 crc kubenswrapper[4619]: I0126 10:56:14.983016 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.000693 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:14Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.017316 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:15Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.034587 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.034638 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.034647 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.034662 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.034674 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.038860 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b644
90146f6cdf1c2a88e69b648b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:15Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.056259 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:15Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.074237 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:15Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.137057 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.137096 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.137132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.137151 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.137163 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.239662 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.239734 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.239753 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.239775 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.239794 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.265476 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:15 crc kubenswrapper[4619]: E0126 10:56:15.265667 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.265996 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:15 crc kubenswrapper[4619]: E0126 10:56:15.266104 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.266316 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:15 crc kubenswrapper[4619]: E0126 10:56:15.266410 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.269483 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:12:40.952040812 +0000 UTC Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.342782 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.342841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.342864 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.342895 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.342919 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.446339 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.446423 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.446448 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.446489 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.446514 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.550461 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.550547 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.550576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.550609 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.550670 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.654391 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.654462 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.654477 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.654498 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.654511 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.757165 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.757221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.757234 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.757254 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.757268 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.859551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.859605 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.859638 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.859656 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.859666 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.963984 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.964047 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.964068 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.964097 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:15 crc kubenswrapper[4619]: I0126 10:56:15.964114 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:15Z","lastTransitionTime":"2026-01-26T10:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.067536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.067588 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.067604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.067667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.067688 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.171608 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.171718 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.171743 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.171776 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.171800 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.260875 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:16 crc kubenswrapper[4619]: E0126 10:56:16.261108 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.270116 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:56:54.674750587 +0000 UTC Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.275262 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.275322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.275367 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.275395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.275411 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.378349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.378415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.378427 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.378443 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.378454 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.481207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.481264 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.481281 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.481311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.481329 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.585099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.585173 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.585193 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.585223 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.585241 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.688687 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.688759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.688771 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.688792 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.688805 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.792040 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.792146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.792166 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.792196 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.792216 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.895493 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.895584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.895603 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.895680 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.895699 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.998775 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.998843 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.998855 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.998881 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:16 crc kubenswrapper[4619]: I0126 10:56:16.998894 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:16Z","lastTransitionTime":"2026-01-26T10:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.103287 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.103357 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.103379 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.103413 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.103437 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.206413 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.206541 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.206559 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.206578 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.206590 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.261958 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:17 crc kubenswrapper[4619]: E0126 10:56:17.262225 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.262699 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:17 crc kubenswrapper[4619]: E0126 10:56:17.262861 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.263592 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:17 crc kubenswrapper[4619]: E0126 10:56:17.263778 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.270751 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:18:05.950022745 +0000 UTC Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.309632 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.309690 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.309704 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.309728 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.309744 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.412700 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.412774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.412817 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.412854 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.412875 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.517873 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.517941 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.517963 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.517989 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.518007 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.621247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.621322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.621340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.621365 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.621382 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.724298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.724370 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.724391 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.724418 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.724469 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.827480 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.827573 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.827587 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.827610 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.827659 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.931099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.931177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.931194 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.931220 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:17 crc kubenswrapper[4619]: I0126 10:56:17.931240 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:17Z","lastTransitionTime":"2026-01-26T10:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.034603 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.034733 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.034759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.034793 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.034824 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.138040 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.138125 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.138151 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.138183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.138203 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.241934 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.242010 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.242037 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.242071 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.242097 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.260640 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:18 crc kubenswrapper[4619]: E0126 10:56:18.261157 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.261583 4619 scope.go:117] "RemoveContainer" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.271823 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:45:38.186769272 +0000 UTC Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.344968 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.345003 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.345015 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.345033 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.345046 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.447262 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.447321 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.447333 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.447356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.447370 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.550795 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.550839 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.550851 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.550870 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.550882 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.653173 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.653210 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.653217 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.653231 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.653240 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.755644 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.755693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.755703 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.755722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.755733 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.769883 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/2.log" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.772238 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.772868 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.790897 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.804601 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.817517 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.831905 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.848376 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.858359 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.858402 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.858413 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.858432 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.858445 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.870396 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.888585 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.906899 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.921932 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.956674 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.961707 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.961747 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.961758 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.961774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.961786 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:18Z","lastTransitionTime":"2026-01-26T10:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.975889 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:18 crc kubenswrapper[4619]: I0126 10:56:18.992228 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:18Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.007802 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.023956 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.034484 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.047058 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.059588 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.064691 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.064748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.064763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.064784 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.064799 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.078563 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.167442 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.167496 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.167506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.167533 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.167545 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.260589 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.260673 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:19 crc kubenswrapper[4619]: E0126 10:56:19.260880 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:19 crc kubenswrapper[4619]: E0126 10:56:19.260932 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.261164 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:19 crc kubenswrapper[4619]: E0126 10:56:19.261422 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.270499 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.270554 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.270568 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.270587 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.270601 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.272939 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:32:24.342480634 +0000 UTC Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.374006 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.374068 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.374086 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.374115 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.374134 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.477279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.477418 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.477437 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.477464 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.477485 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.580896 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.581348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.581441 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.581527 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.581595 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.685461 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.685519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.685528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.685547 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.685562 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.780312 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/3.log" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.781550 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/2.log" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.787074 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" exitCode=1 Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.787174 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.787255 4619 scope.go:117] "RemoveContainer" containerID="bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788199 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788266 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.788734 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:56:19 crc kubenswrapper[4619]: E0126 10:56:19.789179 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.817521 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.838418 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.853183 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.870141 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.888084 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.891708 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.891778 4619 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.891798 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.891827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.891846 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.905678 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.925987 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.949396 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 
2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.965144 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.981129 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:19Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.994725 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.994798 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.994816 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.994843 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:19 crc kubenswrapper[4619]: I0126 10:56:19.994863 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:19Z","lastTransitionTime":"2026-01-26T10:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.013493 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e
8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb5219d94641fbcd69397a82d03a6089ff09b64490146f6cdf1c2a88e69b648b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:55:54Z\\\",\\\"message\\\":\\\"ddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-multus/multus-admission-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141246 6183 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141258 6183 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0126 10:55:54.141260 6183 services_controller.go:445] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0126 10:55:54.140587 6183 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:19Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139475 6549 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 15.769045ms\\\\nI0126 10:56:19.139484 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-config-operator for network=default : 18.136497ms\\\\nI0126 10:56:19.139477 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 10:56:19.139437 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139502 6549 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default 
: 11.757579ms\\\\nI0126 10:56:19.139515 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.205347ms\\\\nI0126 10:56:19.139854 6549 ovnkube.go:599] Stopped ovnkube\\\\nI0126 10:56:19.139886 6549 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 10:56:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.037937 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.054581 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.076099 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.094208 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.098084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.098130 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.098147 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.098173 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.098190 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.114278 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.140425 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.163697 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.201598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.201752 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.201773 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.201799 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.201817 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.261031 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:20 crc kubenswrapper[4619]: E0126 10:56:20.261363 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.273729 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:02:08.328114125 +0000 UTC Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.305193 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.305284 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.305297 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.305321 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.305334 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.408414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.408878 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.409038 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.409213 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.409376 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.513555 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.513695 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.513723 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.513759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.513785 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.617609 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.617729 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.617746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.617775 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.617793 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.721114 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.721190 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.721217 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.721246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.721268 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.796066 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/3.log" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.804323 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:56:20 crc kubenswrapper[4619]: E0126 10:56:20.804693 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.823736 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.826016 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.826157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.826183 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.826215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.826238 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.842493 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.857294 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.871994 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.895375 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399
a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:19Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139475 6549 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 15.769045ms\\\\nI0126 10:56:19.139484 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-config-operator for network=default : 18.136497ms\\\\nI0126 10:56:19.139477 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 10:56:19.139437 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139502 6549 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 11.757579ms\\\\nI0126 10:56:19.139515 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.205347ms\\\\nI0126 10:56:19.139854 6549 ovnkube.go:599] Stopped ovnkube\\\\nI0126 10:56:19.139886 6549 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 
10:56:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.909498 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.925930 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.929816 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.929865 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.929878 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.929900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.929916 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:20Z","lastTransitionTime":"2026-01-26T10:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.941307 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.955017 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.968847 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.983241 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:20 crc kubenswrapper[4619]: I0126 10:56:20.998812 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab
6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:20Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.014012 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.031797 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.032099 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.032157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.032171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.032189 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.032203 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.049551 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.066515 4619 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.086576 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.103331 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.134755 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.134817 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.134826 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.134841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.134869 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.237346 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.237402 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.237416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.237439 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.237456 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.261014 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.261104 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:56:21 crc kubenswrapper[4619]: E0126 10:56:21.261311 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.261326 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:56:21 crc kubenswrapper[4619]: E0126 10:56:21.261361 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:56:21 crc kubenswrapper[4619]: E0126 10:56:21.261474 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.275108 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:42:06.447589153 +0000 UTC Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.280638 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.289957 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.301417 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.339951 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.340001 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.340016 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.340034 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.340045 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.339353 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e
8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:19Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139475 6549 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 15.769045ms\\\\nI0126 10:56:19.139484 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-config-operator for network=default : 18.136497ms\\\\nI0126 10:56:19.139477 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 10:56:19.139437 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139502 6549 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 11.757579ms\\\\nI0126 10:56:19.139515 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.205347ms\\\\nI0126 10:56:19.139854 6549 ovnkube.go:599] Stopped ovnkube\\\\nI0126 10:56:19.139886 6549 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 10:56:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.387249 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.408499 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.423888 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.438846 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z"
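
Every failed status patch in this stretch of the log has the same root cause: the kubelet cannot POST to the pod.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24, five months before the log's current time. A minimal sketch of how one could confirm the expiry from the node, assuming the webhook is still listening on 127.0.0.1:9743 (endpoint and port taken from the log; the script and its names are illustrative, not part of the kubelet):

```python
#!/usr/bin/env python3
"""Hypothetical diagnostic, not kubelet code: fetch the webhook's leaf
certificate and report whether it has expired. Endpoint and port are
taken from the log; the `cryptography` package is assumed available."""
import socket
import ssl
from datetime import datetime

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook address from the log above

# Verification is disabled on purpose: the goal is to *read* the
# already-expired leaf certificate, which strict verification rejects.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.utcnow()
print("notAfter :", cert.not_valid_after)   # expect 2025-08-24 17:21:41
print("now (UTC):", now)
print("EXPIRED" if now > cert.not_valid_after else "still valid")
```

With verification disabled the handshake itself succeeds, which is why the expiry only surfaces on the kubelet side as the x509 verification error seen above.

Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.442525 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.442557 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.442566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.442580 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.442591 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.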
Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.456638 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.471654 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a
3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.484907 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.497565 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.508588 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.523994 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.537783 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.544853 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.544892 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.544904 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.544919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.544927 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.558244 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.573560 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.586715 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:21Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.649484 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 
crc kubenswrapper[4619]: I0126 10:56:21.649545 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.649555 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.649574 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.649585 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.753197 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.753586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.753595 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.753631 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.753643 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.856864 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.856933 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.856943 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.856963 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.856974 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.959948 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.960018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.960028 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.960051 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:21 crc kubenswrapper[4619]: I0126 10:56:21.960062 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:21Z","lastTransitionTime":"2026-01-26T10:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.062512 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.062592 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.062646 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.062679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.062700 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
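
The NodeNotReady condition repeated above is independent of the webhook problem: the container runtime reports NetworkReady=false as long as /etc/kubernetes/cni/net.d/ holds no CNI configuration. A hypothetical spot-check mirroring that condition (the path is from the log message; the suffix list is an assumption about the common CNI config file types, not exhaustive):

```python
#!/usr/bin/env python3
"""Hypothetical spot-check mirroring the NetworkReady condition above:
the runtime reports the network plugin unready while the CNI conf
directory (path from the log) contains no configuration files."""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # path from the log

# .conf, .conflist and .json are the usual CNI config suffixes; this
# list is an assumption for illustration, not the runtime's own filter.
confs = sorted(p for p in CNI_CONF_DIR.glob("*")
               if p.suffix in {".conf", ".conflist", ".json"})

if confs:
    print("CNI config present:", ", ".join(p.name for p in confs))
else:
    print(f"no CNI configuration file in {CNI_CONF_DIR}/ "
          "-- matches the KubeletNotReady reason in the log")
```

Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.166322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.166380 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.166392 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.166411 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.166446 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.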
Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.260844 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:22 crc kubenswrapper[4619]: E0126 10:56:22.261144 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.269290 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.269363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.269384 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.269414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.269433 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.275773 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:02:21.982857679 +0000 UTC Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.373177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.373256 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.373276 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.373304 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.373323 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.476816 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.476890 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.476916 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.476950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.476972 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.580311 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.580435 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.580471 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.580503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.580523 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.683765 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.683827 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.683842 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.683868 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.683888 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.787267 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.787317 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.787333 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.787356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.787373 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.890799 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.890920 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.890946 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.890974 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.890992 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.994429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.994536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.994554 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.994586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:22 crc kubenswrapper[4619]: I0126 10:56:22.994606 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:22Z","lastTransitionTime":"2026-01-26T10:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.013959 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.014277 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.014497 4619 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.014590 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:57:27.014565529 +0000 UTC m=+146.048606275 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.014877 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:27.014862877 +0000 UTC m=+146.048903623 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.099836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.099900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.099919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.099947 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.099968 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.115063 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.115132 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.115185 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115381 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115412 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115434 4619 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115503 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 10:57:27.115481075 +0000 UTC m=+146.149521831 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115824 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115866 4619 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115883 4619 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115880 4619 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115934 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 10:57:27.115916686 +0000 UTC m=+146.149957432 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.115981 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 10:57:27.115954737 +0000 UTC m=+146.149995463 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.203269 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.203329 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.203340 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.203363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.203379 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.260951 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.261033 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.261184 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.261407 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.261549 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.261679 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.276755 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:39:03.901080204 +0000 UTC Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.306842 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.307222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.307381 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.307534 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.307704 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.410670 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.410731 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.410748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.410772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.410792 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.514323 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.514380 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.514404 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.514438 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.514461 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.617659 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.617797 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.617820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.617849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.617866 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.721326 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.721389 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.721447 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.721478 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.721496 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.823685 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.823746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.823758 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.823783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.823797 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.926180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.926236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.926251 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.926272 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.926286 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.945062 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.945146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.945169 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.945203 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.945230 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.962923 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.968018 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.968066 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.968084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.968110 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.968129 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:23 crc kubenswrapper[4619]: E0126 10:56:23.992376 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:23Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.998358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.998400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.998416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.998439 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:23 crc kubenswrapper[4619]: I0126 10:56:23.998455 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:23Z","lastTransitionTime":"2026-01-26T10:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: E0126 10:56:24.017860 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.024100 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.024170 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.024196 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.024230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.024256 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: E0126 10:56:24.049582 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.055074 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.055133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.055152 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.055179 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.055199 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: E0126 10:56:24.073931 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:24Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:24 crc kubenswrapper[4619]: E0126 10:56:24.074300 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.077084 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.077148 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.077177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.077224 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.077252 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.180806 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.180883 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.180897 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.180919 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.180937 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.260732 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:24 crc kubenswrapper[4619]: E0126 10:56:24.261314 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.277287 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:01:23.023965859 +0000 UTC Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.284452 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.284503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.284516 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.284540 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.284555 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.285073 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.388931 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.389002 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.389020 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.389049 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.389068 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.499795 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.499847 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.499860 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.499880 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.499895 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.603942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.604013 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.604039 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.604070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.604094 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.707530 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.707604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.707650 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.707679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.707699 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.810904 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.810979 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.810997 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.811026 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.811046 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.915591 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.915674 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.915691 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.915712 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:24 crc kubenswrapper[4619]: I0126 10:56:24.915727 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:24Z","lastTransitionTime":"2026-01-26T10:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.023490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.023560 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.023576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.023603 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.023764 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.126468 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.126508 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.126516 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.126534 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.126546 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.229916 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.229951 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.229959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.229974 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.229983 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.261972 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.261971 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 10:56:25 crc kubenswrapper[4619]: E0126 10:56:25.262200 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.262288 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:56:25 crc kubenswrapper[4619]: E0126 10:56:25.262371 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 10:56:25 crc kubenswrapper[4619]: E0126 10:56:25.262491 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.277876 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:02:34.094634438 +0000 UTC
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.332701 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.332774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.332791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.332817 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.332836 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.437083 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.437157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.437370 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.437402 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.437424 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.540849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.540914 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.540934 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.540959 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.540976 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.645802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.645853 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.645866 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.645886 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.645900 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.749689 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.749752 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.749769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.749794 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.749811 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.851872 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.851927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.851936 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.851952 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.851962 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.955603 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.955727 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.955753 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.955786 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:25 crc kubenswrapper[4619]: I0126 10:56:25.955809 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:25Z","lastTransitionTime":"2026-01-26T10:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.059214 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.059291 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.059313 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.059342 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.059371 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.162147 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.162192 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.162203 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.162222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.162235 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.260506 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:26 crc kubenswrapper[4619]: E0126 10:56:26.260704 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.264950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.265011 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.265051 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.265074 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.265089 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.278187 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:08:04.81162838 +0000 UTC Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.368888 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.368952 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.368962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.368980 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.368992 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.472510 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.472579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.472602 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.472668 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.472688 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.576591 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.576679 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.576697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.576723 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.576747 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.679862 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.679927 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.679946 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.679970 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.679988 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.783712 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.783786 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.783807 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.783836 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.783853 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.886947 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.887022 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.887047 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.887076 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.887095 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.990173 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.990239 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.990255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.990280 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:26 crc kubenswrapper[4619]: I0126 10:56:26.990303 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:26Z","lastTransitionTime":"2026-01-26T10:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.094119 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.094174 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.094184 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.094211 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.094222 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.196543 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.196589 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.196600 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.196644 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.196656 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.260609 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.260712 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.260713 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:27 crc kubenswrapper[4619]: E0126 10:56:27.261025 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:27 crc kubenswrapper[4619]: E0126 10:56:27.261227 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:27 crc kubenswrapper[4619]: E0126 10:56:27.261565 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.278560 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:38:03.022615154 +0000 UTC Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.300160 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.300221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.300236 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.300257 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.300269 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.404729 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.404838 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.404873 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.404910 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.404962 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.508363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.508414 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.508427 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.508454 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.508477 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.612412 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.612488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.612506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.612535 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.612557 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.715775 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.715826 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.716062 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.716083 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.716095 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.819180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.819248 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.819271 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.819303 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.819326 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.922917 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.922987 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.923006 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.923034 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:27 crc kubenswrapper[4619]: I0126 10:56:27.923052 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:27Z","lastTransitionTime":"2026-01-26T10:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.028442 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.028536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.028557 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.028579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.028594 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.131644 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.131697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.131712 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.131732 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.131746 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.235534 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.235598 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.235654 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.235681 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.235698 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.260210 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:28 crc kubenswrapper[4619]: E0126 10:56:28.260444 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.279801 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:36:17.467459074 +0000 UTC Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.339086 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.339140 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.339151 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.339168 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.339183 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.443285 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.443363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.443386 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.443418 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.443442 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.546528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.546571 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.546584 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.546604 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.546634 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.650322 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.650395 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.650415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.650441 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.650460 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.753875 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.753983 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.754006 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.754036 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.754062 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.857192 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.857258 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.857279 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.857305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.857325 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.960086 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.960171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.960195 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.960226 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:28 crc kubenswrapper[4619]: I0126 10:56:28.960251 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:28Z","lastTransitionTime":"2026-01-26T10:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.063740 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.063812 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.063839 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.063873 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.063894 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.176721 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.176772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.176787 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.176809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.176823 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.260133 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.260291 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:29 crc kubenswrapper[4619]: E0126 10:56:29.260366 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.260404 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:29 crc kubenswrapper[4619]: E0126 10:56:29.260666 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:29 crc kubenswrapper[4619]: E0126 10:56:29.260727 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279818 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279861 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279872 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279889 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279907 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.279928 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:32:45.855974813 +0000 UTC Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.382962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.383026 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.383061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.383090 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.383111 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.486167 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.486218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.486230 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.486255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.486266 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.589636 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.589693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.589703 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.589725 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.589736 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.693235 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.693295 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.693305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.693321 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.693334 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.797004 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.797072 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.797089 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.797109 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.797125 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.900937 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.901015 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.901032 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.901060 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:29 crc kubenswrapper[4619]: I0126 10:56:29.901079 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:29Z","lastTransitionTime":"2026-01-26T10:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.004388 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.004472 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.004486 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.004515 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.004535 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.107061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.107101 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.107118 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.107174 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.107188 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.209955 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.210507 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.210524 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.210550 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.210566 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.260399 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7"
Jan 26 10:56:30 crc kubenswrapper[4619]: E0126 10:56:30.260575 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.280537 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:59:27.819547708 +0000 UTC
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.312455 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.312506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.312519 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.312539 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.312552 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.415722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.415778 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.415790 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.415812 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.415853 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.518717 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.518787 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.518805 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.518845 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.518907 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.622687 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.622748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.622766 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.622791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.622811 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.726771 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.726897 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.726942 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.727022 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.727052 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.831222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.831283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.831301 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.831329 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.831348 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.935160 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.935263 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.935284 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.935315 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:30 crc kubenswrapper[4619]: I0126 10:56:30.935333 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:30Z","lastTransitionTime":"2026-01-26T10:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.039535 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.039647 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.039668 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.039700 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.039718 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.142275 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.142338 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.142352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.142380 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.142399 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.245469 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.245520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.245532 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.245553 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.245567 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.261059 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.261144 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 10:56:31 crc kubenswrapper[4619]: E0126 10:56:31.261302 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.261319 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:31 crc kubenswrapper[4619]: E0126 10:56:31.261486 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:31 crc kubenswrapper[4619]: E0126 10:56:31.261796 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.276259 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.280784 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:25:26.622380386 +0000 UTC Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.295355 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.312387 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.332234 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349123 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349162 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349187 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349201 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.349684 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.363913 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.380548 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.395732 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.415367 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399
a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:19Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139475 6549 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 15.769045ms\\\\nI0126 10:56:19.139484 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-config-operator for network=default : 18.136497ms\\\\nI0126 10:56:19.139477 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 10:56:19.139437 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139502 6549 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 11.757579ms\\\\nI0126 10:56:19.139515 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.205347ms\\\\nI0126 10:56:19.139854 6549 ovnkube.go:599] Stopped ovnkube\\\\nI0126 10:56:19.139886 6549 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 
10:56:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.429842 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.445900 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.451602 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.451686 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.451709 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.451747 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.451769 4619 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.458680 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.474346 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.487021 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.501132 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.522880 4619 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7def49c6-e144-42b7-8f36-0625f1d34565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec7e9ee0bab7dd1ad1b12a3a8ad86ae68690e864d79c8804d8a3ad55cc63cd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136e39d9b7d5eff7086e9983574a1c22186a06d9c4d2b5d566d74202749f487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca5e9ec76892ec00b79ecafda52f33afe0fcfe3b40bceaa76e586d95a62d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://6071a43e179915476d4051e159c82a42544006d424aa36a5a81ef4efea75823b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f0b03f072d78d4ff1bac8a10e9522f3b9af4121b08380e95a58b18e64fade8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://41190c5262d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41190c5262d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.536469 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.548548 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.553817 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.553877 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.553888 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.553907 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.553921 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.563110 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:31Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.656739 4619 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.656823 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.656841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.656869 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.656887 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.759273 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.759334 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.759352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.759388 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.759411 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.863352 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.863873 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.864163 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.864400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.864916 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.969361 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.969439 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.969457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.969487 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:31 crc kubenswrapper[4619]: I0126 10:56:31.969506 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:31Z","lastTransitionTime":"2026-01-26T10:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.072599 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.072678 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.072690 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.072710 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.072723 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.176358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.176437 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.176459 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.176491 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.176515 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.260702 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:32 crc kubenswrapper[4619]: E0126 10:56:32.260930 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.279306 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.279372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.279390 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.279416 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.279435 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.281392 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:15:33.85914314 +0000 UTC Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.382469 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.382517 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.382528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.382546 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.382560 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.485590 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.485706 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.485730 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.485757 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.485775 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.589393 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.589467 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.589492 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.589527 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.589555 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.693576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.693777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.693798 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.693828 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.693861 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.797722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.797791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.797815 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.797849 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.797876 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.899956 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.900027 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.900041 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.900061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:32 crc kubenswrapper[4619]: I0126 10:56:32.900074 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:32Z","lastTransitionTime":"2026-01-26T10:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.003345 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.003506 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.003536 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.003565 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.003586 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.107207 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.107295 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.107317 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.107356 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.107384 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.210985 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.211068 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.211092 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.211126 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.211150 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.260912 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.260954 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.261731 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:33 crc kubenswrapper[4619]: E0126 10:56:33.261948 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:33 crc kubenswrapper[4619]: E0126 10:56:33.262070 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:33 crc kubenswrapper[4619]: E0126 10:56:33.262243 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.262343 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:56:33 crc kubenswrapper[4619]: E0126 10:56:33.262607 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.281643 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:41:26.067685336 +0000 UTC Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.315529 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.315590 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.315601 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.315868 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.315886 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.419077 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.419149 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.419173 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.419200 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.419220 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.523660 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.523733 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.523751 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.523780 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.523803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.627177 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.627242 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.627260 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.627283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.627300 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.730698 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.730761 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.730777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.730862 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.730882 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.834774 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.834856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.834872 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.834898 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.834915 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.938306 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.938364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.938381 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.938407 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:33 crc kubenswrapper[4619]: I0126 10:56:33.938424 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:33Z","lastTransitionTime":"2026-01-26T10:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.042039 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.042095 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.042111 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.042132 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.042145 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.146304 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.146389 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.146411 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.146447 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.146471 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.259392 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.259483 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.259505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.259533 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.259553 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.260762 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.261109 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.262611 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.262722 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.262747 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.262777 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.262798 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.281948 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:37:32.026931175 +0000 UTC Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.288825 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.295009 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.295070 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.295095 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.295123 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.295148 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.316559 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.322083 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.322133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.322152 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.322175 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.322193 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.340483 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.346779 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.346824 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.346842 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.346867 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.346886 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.367502 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.372965 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.373013 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.373023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.373042 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.373054 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.385878 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b26d7c31-8260-474d-b523-691101850253\\\",\\\"systemUUID\\\":\\\"6aae6ba9-96c1-4d99-8b9a-90adac40daa6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:34Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:34 crc kubenswrapper[4619]: E0126 10:56:34.386049 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.388524 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.388575 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.388593 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.388640 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.388658 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.492677 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.492745 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.492759 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.492786 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.492803 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.596376 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.596435 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.596446 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.596471 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.596483 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.700062 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.700122 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.700131 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.700150 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.700161 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.803508 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.803565 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.803573 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.803594 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.803609 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.906695 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.906773 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.906791 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.906810 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:34 crc kubenswrapper[4619]: I0126 10:56:34.906822 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:34Z","lastTransitionTime":"2026-01-26T10:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.010900 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.010979 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.011000 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.011031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.011048 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.115054 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.115117 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.115134 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.115161 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.115291 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.218031 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.218579 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.218762 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.218911 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.219068 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.261092 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.261141 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.261880 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:35 crc kubenswrapper[4619]: E0126 10:56:35.262099 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:35 crc kubenswrapper[4619]: E0126 10:56:35.262271 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:35 crc kubenswrapper[4619]: E0126 10:56:35.262392 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.282301 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:17:43.718015511 +0000 UTC Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.322441 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.322503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.322520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.322551 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.322568 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.426215 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.426283 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.426305 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.426364 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.426382 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.533856 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.533925 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.533944 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.533975 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.533998 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.637844 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.637921 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.637943 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.637976 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.637999 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.742293 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.742369 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.742387 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.742417 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.742434 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.845898 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.845940 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.845950 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.845969 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.845982 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.948905 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.948984 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.949003 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.949032 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:35 crc kubenswrapper[4619]: I0126 10:56:35.949051 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:35Z","lastTransitionTime":"2026-01-26T10:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.052434 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.052479 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.052490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.052508 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.052520 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.156405 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.156453 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.156463 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.156487 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.156497 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.259327 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.259369 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.259381 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.259400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.259411 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.260790 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:36 crc kubenswrapper[4619]: E0126 10:56:36.260942 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.283304 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:09:53.954037425 +0000 UTC Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.363108 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.363155 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.363164 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.363180 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.363192 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.466260 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.466372 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.466386 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.466406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.466418 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.569490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.569533 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.569547 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.569566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.569578 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.672903 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.672964 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.672981 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.673008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.673026 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.776878 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.777008 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.777033 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.777061 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.777081 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.880488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.880648 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.880674 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.880709 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.880732 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.984391 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.984452 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.984465 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.984554 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:36 crc kubenswrapper[4619]: I0126 10:56:36.984567 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:36Z","lastTransitionTime":"2026-01-26T10:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.087737 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.087841 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.087862 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.087889 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.087912 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.191133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.191203 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.191221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.191247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.191266 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.260540 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.260599 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.260699 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:37 crc kubenswrapper[4619]: E0126 10:56:37.260833 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:37 crc kubenswrapper[4619]: E0126 10:56:37.261005 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:37 crc kubenswrapper[4619]: E0126 10:56:37.261108 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.283472 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:19:05.72866513 +0000 UTC Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.294505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.294642 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.294665 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.294695 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.294718 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.398036 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.398128 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.398146 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.398178 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.398195 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.501794 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.501874 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.501894 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.501922 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.501946 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.605694 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.605765 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.605783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.605809 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.605831 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.709369 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.709432 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.709449 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.709474 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.709499 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.812711 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.812781 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.812793 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.812811 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.812825 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.915347 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.915406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.915415 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.915436 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:37 crc kubenswrapper[4619]: I0126 10:56:37.915471 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:37Z","lastTransitionTime":"2026-01-26T10:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.018660 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.018724 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.018737 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.018761 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.018773 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.122162 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.122259 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.122285 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.122314 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.122334 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.225918 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.225987 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.226005 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.226033 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.226052 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.260868 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:38 crc kubenswrapper[4619]: E0126 10:56:38.261113 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.284279 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:15:21.388333231 +0000 UTC Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.330420 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.330527 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.330552 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.330582 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.330601 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.434029 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.434095 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.434110 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.434137 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.434157 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.537655 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.537723 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.537740 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.537766 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.537785 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.642859 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.642934 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.642956 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.642986 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.643015 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.747007 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.747097 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.747122 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.747161 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.747233 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.850924 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.850963 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.850975 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.850994 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.851006 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.954492 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.954566 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.954586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.954630 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:38 crc kubenswrapper[4619]: I0126 10:56:38.954649 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:38Z","lastTransitionTime":"2026-01-26T10:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.057686 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.057738 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.057748 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.057769 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.057780 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.161555 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.161667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.161689 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.161717 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.161737 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.260748 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.260885 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.260905 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:39 crc kubenswrapper[4619]: E0126 10:56:39.261157 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:39 crc kubenswrapper[4619]: E0126 10:56:39.261355 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:39 crc kubenswrapper[4619]: E0126 10:56:39.261804 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.264430 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.264472 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.264481 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.264495 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.264509 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.285126 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:16:17.421204537 +0000 UTC Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.367917 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.367977 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.367993 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.368019 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.368041 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.471422 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.471475 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.471487 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.471505 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.471515 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.574744 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.574804 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.574820 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.574845 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.574863 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.678282 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.678354 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.678374 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.678404 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.678424 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.782227 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.782307 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.782326 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.782361 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.782387 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.886050 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.886122 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.886157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.886198 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.886228 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.990136 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.990220 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.990239 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.990269 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:39 crc kubenswrapper[4619]: I0126 10:56:39.990290 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:39Z","lastTransitionTime":"2026-01-26T10:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.094253 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.094316 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.094332 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.094358 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.094375 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.197370 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.197427 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.197440 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.197457 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.197472 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.261017 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:40 crc kubenswrapper[4619]: E0126 10:56:40.261300 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.285745 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:58:57.222877927 +0000 UTC Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.300697 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.300741 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.300752 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.300770 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.300783 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.404197 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.404238 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.404247 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.404265 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.404275 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.510528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.510607 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.510667 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.510692 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.510706 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.613390 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.613451 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.613465 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.613488 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.613502 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.718133 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.718201 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.718218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.718243 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.718259 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.821117 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.821168 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.821182 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.821206 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.821219 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.924310 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.924398 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.924408 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.924429 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:40 crc kubenswrapper[4619]: I0126 10:56:40.924445 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:40Z","lastTransitionTime":"2026-01-26T10:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.027865 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.027918 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.027929 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.027948 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.027961 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.132157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.132218 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.132231 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.132253 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.132266 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.236328 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.236420 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.236445 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.236520 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.236549 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.260999 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.261120 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.261004 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:41 crc kubenswrapper[4619]: E0126 10:56:41.261274 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:41 crc kubenswrapper[4619]: E0126 10:56:41.261389 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:41 crc kubenswrapper[4619]: E0126 10:56:41.261502 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.281703 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f31d0267-7c14-469f-aa35-c62a7e22e236\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684bfdf7352b2c2c2da47372847d8ad2da8f297db21df4a9ee95af1c911ed801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4d0fa82c1e0c7288072c19b175cc433e44b8ec49a1951b3286c032c350d9177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adac1c5d43727ca7872d61a7a205c3cffb45cd818d612abbe66d96158f8e16c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b89d58f8fe9ee1a688f79f658dac138818e547dcccdd952370b2de019f65cb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.286006 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:56:18.636342434 +0000 UTC Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.300728 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adf87c2129de5c283a536b9c0f286f540e91d8e0181a80e4b93e86c93286e3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.313657 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v22hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5a1e1f-e05a-4fec-82df-3491fad4b710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fee88ded3b09b1703c045be402aa92da417f3ec4476d3f8d63e016162025fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhvz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v22hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.329732 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f33a41bb-6406-4c73-8024-4acd72817832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4f72120cb39acedeeead5975b3818ab59b1d9ef97edac46a4d0c695fb47abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk9lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28hd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.339315 4619 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.339368 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.339384 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.339410 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.339425 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.360699 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed93d0d-0709-4425-b378-6b8a15318070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e
8f3206f97e2c330493275aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:19Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139475 6549 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 15.769045ms\\\\nI0126 10:56:19.139484 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-config-operator for network=default : 18.136497ms\\\\nI0126 10:56:19.139477 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 10:56:19.139437 6549 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0126 10:56:19.139502 6549 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 11.757579ms\\\\nI0126 10:56:19.139515 6549 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.205347ms\\\\nI0126 10:56:19.139854 6549 ovnkube.go:599] Stopped ovnkube\\\\nI0126 10:56:19.139886 6549 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 10:56:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:56:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kls7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6xtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.377562 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4ef536-778e-47e5-afb2-539e96eba778\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-44sfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bs2t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.397139 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f41b65e-88fb-45c3-a959-984e44525720\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"file observer\\\\nW0126 10:55:18.933962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 10:55:18.934147 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 10:55:18.935958 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-741477915/tls.crt::/tmp/serving-cert-741477915/tls.key\\\\\\\"\\\\nI0126 10:55:19.251576 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 10:55:19.254134 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 10:55:19.254152 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 10:55:19.254171 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 10:55:19.254176 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 10:55:19.259214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 10:55:19.259226 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 10:55:19.259243 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259249 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 10:55:19.259254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 10:55:19.259257 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 10:55:19.259262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 10:55:19.259265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 10:55:19.262265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.417675 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4219d2e6-45d0-4591-a8be-d0a79aad2a7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47326ed107e580f0ebb47b0b04ef74575b6a46a772ab7d5402ffd0eaa4c64b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6704edaf3297a18a1321bcb84ccf59ad0035459090b75e3768fffa7458a7c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dac76c8b25fd158211789faab6c898c533269bcfa9be941a3248733d64a2b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.435021 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z"
Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.442093 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.442171 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.442188 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.442210 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.442251 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.457102 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3c5f7d0-80be-4cd1-8700-edae2eb1a04a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040ecfd813bfe1593da976b353abbf4b1e184e4bec225208352164785ed0d685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32271b4021ac8641ad8a72cad748c1f960bc34913f86425354c1dfdc7baec2ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98b2f70033a4f9efc6b05dd940c58a621d43f9324193e65dd234e36e77b61cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9fdc35f543853b657d64cbe3d4eca2d514757d3f7a9658047f1a61e0f29105\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a886c2945e46718e8df2316f05c5b3501a5628eb4e65f30a4c1e3d033e0cc8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64458de114c575bf998d3cacc39cfd9e969ea18ca2b877106f7d5177a41d6e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0ec8ec8ca04d152867ddf39415b4780f16a21cea17368d2621541baa41974b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9fvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5j9c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.493351 4619 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7def49c6-e144-42b7-8f36-0625f1d34565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec7e9ee0bab7dd1ad1b12a3a8ad86ae68690e864d79c8804d8a3ad55cc63cd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136e39d9b7d5eff7086e9983574a1c22186a06d9c4d2b5d566d74202749f487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca5e9ec76892ec00b79ecafda52f33afe0fcfe3b40bceaa76e586d95a62d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://6071a43e179915476d4051e159c82a42544006d424aa36a5a81ef4efea75823b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f0b03f072d78d4ff1bac8a10e9522f3b9af4121b08380e95a58b18e64fade8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a98a2217c238f895c59c00102801bd7233787f028671e513b8c21927ae8afa6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469cc8b055a2e99577efc86d33f62f22c6390d4947b75f779b26f2a63875af68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://41190c5262
d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41190c5262d846fb6eb8ca1f8eb63f62a081076ba648214f4584878190352d56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.511999 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.533646 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa13aefb41209d7de99b5c4723624e1f3d999e9ef8ff1db819cdf34b1292916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c679eeefbc3d43b6c38b3bf0a6caf32db9680567f5796fb8422ec71e5e9373c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.545503 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.545548 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.545563 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.545586 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.545603 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.554049 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-684hz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aab93f8-6555-4389-b15c-9af458caa339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T10:56:12Z\\\",\\\"message\\\":\\\"2026-01-26T10:55:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea\\\\n2026-01-26T10:55:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31827aa7-227d-40e4-8d24-0c50fdb78eea to /host/opt/cni/bin/\\\\n2026-01-26T10:55:27Z [verbose] multus-daemon started\\\\n2026-01-26T10:55:27Z [verbose] Readiness Indicator file check\\\\n2026-01-26T10:56:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvrcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-684hz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.569987 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8da9fdd-1a7d-4adb-80ba-3bcaef1892ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b899b71eba7b32e4ac82dc4f861658da5dd6fad9b21cdd49df50c6687cfcc90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://450dddfc293d70b41641eda8ca7227b7f19bc8b253c718744224cccdf97a1c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T10:55:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T10:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.587431 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07adfeea352c74cb910a882e8594d2912f2d7e00696170e606711ef42d7a94b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.603365 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fzj46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b491a22b-b179-42a8-bebd-4dfc7ae4cb71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5c6c5d725e8d061aef32a1c9360dfb0e0ffd766b9348b5ef1b4c114995ac9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjncm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fzj46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.621144 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d1ba0a5-54cd-4f55-b3c9-cdd5c75e26df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b771a4b98ddb7b088189501e68a744900bc39e69b33ff54e6bbe326218bf25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbb52b66491323889833d0a5db94cf686a9edb6629b5fb0dda213ffef3c8f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T10:55:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2nnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T10:55:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m6m7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 
10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.641272 4619 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T10:55:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T10:56:41Z is after 2025-08-24T17:21:41Z" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.648157 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.648221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.648239 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.648268 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.648285 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.751466 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.751529 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.751548 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.751576 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.751595 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.855202 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.855270 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.855294 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.855383 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.855411 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.958386 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.958436 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.958450 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.958472 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:41 crc kubenswrapper[4619]: I0126 10:56:41.958485 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:41Z","lastTransitionTime":"2026-01-26T10:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.061254 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.061351 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.061362 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.061377 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.061386 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.164053 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.164119 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.164134 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.164158 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.164172 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.260242 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:42 crc kubenswrapper[4619]: E0126 10:56:42.260786 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.268022 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.268293 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.268496 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.268749 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.268966 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.286240 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:42:03.401991263 +0000 UTC Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.374017 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.374103 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.374118 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.374143 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.374166 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.477917 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.478012 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.478045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.478080 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.478108 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.558175 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:42 crc kubenswrapper[4619]: E0126 10:56:42.558949 4619 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:56:42 crc kubenswrapper[4619]: E0126 10:56:42.559211 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs podName:6a4ef536-778e-47e5-afb2-539e96eba778 nodeName:}" failed. No retries permitted until 2026-01-26 10:57:46.559183102 +0000 UTC m=+165.593223848 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs") pod "network-metrics-daemon-bs2t7" (UID: "6a4ef536-778e-47e5-afb2-539e96eba778") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.580186 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.580221 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.580229 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.580246 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.580255 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.683406 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.683455 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.683466 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.683490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.683503 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.787255 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.787300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.787309 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.787327 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.787340 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.890398 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.890460 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.890470 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.890490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.890502 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.993939 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.994012 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.994023 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.994045 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:42 crc kubenswrapper[4619]: I0126 10:56:42.994058 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:42Z","lastTransitionTime":"2026-01-26T10:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.096859 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.096904 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.096923 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.096943 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.096955 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.199456 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.199500 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.199510 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.199528 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.199537 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.261153 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.261201 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.261273 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:43 crc kubenswrapper[4619]: E0126 10:56:43.261369 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:43 crc kubenswrapper[4619]: E0126 10:56:43.261827 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:43 crc kubenswrapper[4619]: E0126 10:56:43.262208 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.286691 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:52:51.857049822 +0000 UTC Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.306693 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.306772 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.306783 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.306802 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.306815 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.409411 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.409490 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.409510 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.409540 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.409558 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.512428 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.512480 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.512494 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.512515 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.512527 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.616653 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.616721 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.616733 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.616751 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.616783 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.719727 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.719784 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.719796 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.719819 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.719832 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.823138 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.823203 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.823222 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.823248 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.823268 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.927691 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.927746 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.927763 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.927787 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:43 crc kubenswrapper[4619]: I0126 10:56:43.927804 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:43Z","lastTransitionTime":"2026-01-26T10:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.031216 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.031298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.031320 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.031348 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.031369 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.133899 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.133949 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.133962 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.133988 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.134029 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.237276 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.237349 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.237363 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.237385 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.237398 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.260686 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:44 crc kubenswrapper[4619]: E0126 10:56:44.260847 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.287328 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:10:15.202586673 +0000 UTC Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.340232 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.340290 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.340300 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.340320 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.340330 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.444949 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.445030 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.445053 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.445081 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.445100 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.548298 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.548344 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.548353 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.548369 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.548379 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.613400 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.613448 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.613459 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.613476 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.613486 4619 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T10:56:44Z","lastTransitionTime":"2026-01-26T10:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.668871 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx"] Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.669359 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.672200 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.672694 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.674517 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.676749 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.692192 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.692168239 podStartE2EDuration="1m25.692168239s" podCreationTimestamp="2026-01-26 10:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.691550223 +0000 UTC m=+103.725590949" watchObservedRunningTime="2026-01-26 10:56:44.692168239 +0000 UTC m=+103.726208955" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.713506 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.713475949 podStartE2EDuration="54.713475949s" podCreationTimestamp="2026-01-26 10:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.712875034 +0000 UTC m=+103.746915750" watchObservedRunningTime="2026-01-26 10:56:44.713475949 +0000 UTC m=+103.747516665" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.746071 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-v22hs" podStartSLOduration=81.746050396 podStartE2EDuration="1m21.746050396s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.745637266 +0000 UTC m=+103.779677982" watchObservedRunningTime="2026-01-26 10:56:44.746050396 +0000 UTC m=+103.780091112" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.759827 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podStartSLOduration=81.759789498 podStartE2EDuration="1m21.759789498s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.758887635 +0000 UTC m=+103.792928351" watchObservedRunningTime="2026-01-26 10:56:44.759789498 +0000 UTC m=+103.793830234" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.780461 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb174839-69ef-4c73-a1f1-5051fa3e9355-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.780518 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb174839-69ef-4c73-a1f1-5051fa3e9355-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.780568 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb174839-69ef-4c73-a1f1-5051fa3e9355-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.780845 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.781013 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.826150 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=20.826123765 podStartE2EDuration="20.826123765s" podCreationTimestamp="2026-01-26 10:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.825727043 +0000 UTC m=+103.859767799" watchObservedRunningTime="2026-01-26 10:56:44.826123765 +0000 UTC m=+103.860164491" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.844992 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.84497232 podStartE2EDuration="1m22.84497232s" podCreationTimestamp="2026-01-26 10:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.844741734 +0000 UTC m=+103.878782460" watchObservedRunningTime="2026-01-26 10:56:44.84497232 +0000 UTC m=+103.879013036" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881773 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb174839-69ef-4c73-a1f1-5051fa3e9355-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881829 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881863 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881946 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb174839-69ef-4c73-a1f1-5051fa3e9355-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881951 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.881972 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb174839-69ef-4c73-a1f1-5051fa3e9355-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.882021 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb174839-69ef-4c73-a1f1-5051fa3e9355-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.882942 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb174839-69ef-4c73-a1f1-5051fa3e9355-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.883222 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5j9c8" podStartSLOduration=80.883206776 podStartE2EDuration="1m20.883206776s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.881654465 +0000 UTC m=+103.915695201" watchObservedRunningTime="2026-01-26 10:56:44.883206776 +0000 UTC m=+103.917247512" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 
10:56:44.888815 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb174839-69ef-4c73-a1f1-5051fa3e9355-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.897996 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.897965055 podStartE2EDuration="32.897965055s" podCreationTimestamp="2026-01-26 10:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.89588071 +0000 UTC m=+103.929921436" watchObservedRunningTime="2026-01-26 10:56:44.897965055 +0000 UTC m=+103.932005771" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.904428 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb174839-69ef-4c73-a1f1-5051fa3e9355-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qg8lx\" (UID: \"cb174839-69ef-4c73-a1f1-5051fa3e9355\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.963279 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-684hz" podStartSLOduration=80.963255554 podStartE2EDuration="1m20.963255554s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.948960427 +0000 UTC m=+103.983001143" watchObservedRunningTime="2026-01-26 10:56:44.963255554 +0000 UTC m=+103.997296270" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.986848 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" Jan 26 10:56:44 crc kubenswrapper[4619]: I0126 10:56:44.988884 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fzj46" podStartSLOduration=81.988860987 podStartE2EDuration="1m21.988860987s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:44.987196134 +0000 UTC m=+104.021236850" watchObservedRunningTime="2026-01-26 10:56:44.988860987 +0000 UTC m=+104.022901703" Jan 26 10:56:45 crc kubenswrapper[4619]: W0126 10:56:45.005755 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb174839_69ef_4c73_a1f1_5051fa3e9355.slice/crio-e60fb766c2e7f5db6b9426a4f068f313ec4752c5207f98554cc009f4e753c81a WatchSource:0}: Error finding container e60fb766c2e7f5db6b9426a4f068f313ec4752c5207f98554cc009f4e753c81a: Status 404 returned error can't find the container with id e60fb766c2e7f5db6b9426a4f068f313ec4752c5207f98554cc009f4e753c81a Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.023010 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m6m7q" podStartSLOduration=81.022985125 podStartE2EDuration="1m21.022985125s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:45.022748739 +0000 UTC m=+104.056789455" watchObservedRunningTime="2026-01-26 10:56:45.022985125 +0000 UTC m=+104.057025841" Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.260415 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.260452 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:45 crc kubenswrapper[4619]: E0126 10:56:45.262444 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.260776 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:45 crc kubenswrapper[4619]: E0126 10:56:45.262579 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:45 crc kubenswrapper[4619]: E0126 10:56:45.262904 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.287803 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:22:36.731189094 +0000 UTC Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.287893 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.297023 4619 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.901668 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" event={"ID":"cb174839-69ef-4c73-a1f1-5051fa3e9355","Type":"ContainerStarted","Data":"6d14afc49ad3517c5be239cd3713e9a85b728d60a3ea8ba8b640e3fe844a9eed"} Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.901762 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" event={"ID":"cb174839-69ef-4c73-a1f1-5051fa3e9355","Type":"ContainerStarted","Data":"e60fb766c2e7f5db6b9426a4f068f313ec4752c5207f98554cc009f4e753c81a"} Jan 26 10:56:45 crc kubenswrapper[4619]: I0126 10:56:45.920575 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qg8lx" podStartSLOduration=81.920550338 podStartE2EDuration="1m21.920550338s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:56:45.919743728 +0000 UTC m=+104.953784484" watchObservedRunningTime="2026-01-26 10:56:45.920550338 +0000 UTC m=+104.954591094" Jan 26 10:56:46 crc kubenswrapper[4619]: I0126 10:56:46.261109 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:46 crc kubenswrapper[4619]: E0126 10:56:46.261288 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:46 crc kubenswrapper[4619]: I0126 10:56:46.262101 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:56:46 crc kubenswrapper[4619]: E0126 10:56:46.262336 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:47 crc kubenswrapper[4619]: I0126 10:56:47.261090 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:47 crc kubenswrapper[4619]: E0126 10:56:47.261361 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:47 crc kubenswrapper[4619]: I0126 10:56:47.261779 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:47 crc kubenswrapper[4619]: E0126 10:56:47.261927 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:47 crc kubenswrapper[4619]: I0126 10:56:47.262356 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:47 crc kubenswrapper[4619]: E0126 10:56:47.262513 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:48 crc kubenswrapper[4619]: I0126 10:56:48.260794 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:48 crc kubenswrapper[4619]: E0126 10:56:48.261262 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:49 crc kubenswrapper[4619]: I0126 10:56:49.260866 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:49 crc kubenswrapper[4619]: I0126 10:56:49.260870 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:49 crc kubenswrapper[4619]: I0126 10:56:49.260972 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:49 crc kubenswrapper[4619]: E0126 10:56:49.261711 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:49 crc kubenswrapper[4619]: E0126 10:56:49.261894 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:49 crc kubenswrapper[4619]: E0126 10:56:49.262091 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:50 crc kubenswrapper[4619]: I0126 10:56:50.260215 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:50 crc kubenswrapper[4619]: E0126 10:56:50.260762 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:51 crc kubenswrapper[4619]: I0126 10:56:51.260672 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:51 crc kubenswrapper[4619]: I0126 10:56:51.260731 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:51 crc kubenswrapper[4619]: I0126 10:56:51.264762 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:51 crc kubenswrapper[4619]: E0126 10:56:51.265099 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:51 crc kubenswrapper[4619]: E0126 10:56:51.266003 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:51 crc kubenswrapper[4619]: E0126 10:56:51.266737 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:52 crc kubenswrapper[4619]: I0126 10:56:52.261064 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:52 crc kubenswrapper[4619]: E0126 10:56:52.263275 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:53 crc kubenswrapper[4619]: I0126 10:56:53.260863 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:53 crc kubenswrapper[4619]: I0126 10:56:53.260994 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:53 crc kubenswrapper[4619]: I0126 10:56:53.261081 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:53 crc kubenswrapper[4619]: E0126 10:56:53.261097 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:53 crc kubenswrapper[4619]: E0126 10:56:53.261264 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:53 crc kubenswrapper[4619]: E0126 10:56:53.261397 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:54 crc kubenswrapper[4619]: I0126 10:56:54.260765 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:54 crc kubenswrapper[4619]: E0126 10:56:54.261014 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:55 crc kubenswrapper[4619]: I0126 10:56:55.261041 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:55 crc kubenswrapper[4619]: I0126 10:56:55.261042 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:55 crc kubenswrapper[4619]: I0126 10:56:55.261060 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:55 crc kubenswrapper[4619]: E0126 10:56:55.261817 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:55 crc kubenswrapper[4619]: E0126 10:56:55.261953 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:55 crc kubenswrapper[4619]: E0126 10:56:55.262116 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:56 crc kubenswrapper[4619]: I0126 10:56:56.260350 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:56 crc kubenswrapper[4619]: E0126 10:56:56.260557 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:57 crc kubenswrapper[4619]: I0126 10:56:57.260452 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:57 crc kubenswrapper[4619]: I0126 10:56:57.260902 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:57 crc kubenswrapper[4619]: E0126 10:56:57.261707 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:57 crc kubenswrapper[4619]: I0126 10:56:57.262189 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:57 crc kubenswrapper[4619]: I0126 10:56:57.262283 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:56:57 crc kubenswrapper[4619]: E0126 10:56:57.260815 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:57 crc kubenswrapper[4619]: E0126 10:56:57.263111 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6xtv_openshift-ovn-kubernetes(9ed93d0d-0709-4425-b378-6b8a15318070)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" Jan 26 10:56:57 crc kubenswrapper[4619]: E0126 10:56:57.263343 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:58 crc kubenswrapper[4619]: I0126 10:56:58.260272 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:56:58 crc kubenswrapper[4619]: E0126 10:56:58.260469 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.281390 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:56:59 crc kubenswrapper[4619]: E0126 10:56:59.283662 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.281745 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:56:59 crc kubenswrapper[4619]: E0126 10:56:59.287086 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.281717 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:56:59 crc kubenswrapper[4619]: E0126 10:56:59.287542 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.954636 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/1.log" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.955546 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/0.log" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.955734 4619 generic.go:334] "Generic (PLEG): container finished" podID="8aab93f8-6555-4389-b15c-9af458caa339" containerID="bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406" exitCode=1 Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.955798 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerDied","Data":"bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406"} Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.955988 4619 scope.go:117] "RemoveContainer" containerID="31c93db5b1087896c16de5be574ecebf1beb1cf3bc00744f239a074ca96c3d05" Jan 26 10:56:59 crc kubenswrapper[4619]: I0126 10:56:59.956480 4619 scope.go:117] "RemoveContainer" containerID="bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406" Jan 26 10:56:59 crc kubenswrapper[4619]: E0126 10:56:59.956683 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-684hz_openshift-multus(8aab93f8-6555-4389-b15c-9af458caa339)\"" pod="openshift-multus/multus-684hz" podUID="8aab93f8-6555-4389-b15c-9af458caa339" Jan 26 10:57:00 crc kubenswrapper[4619]: I0126 10:57:00.261176 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:00 crc kubenswrapper[4619]: E0126 10:57:00.261428 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:00 crc kubenswrapper[4619]: I0126 10:57:00.962702 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/1.log" Jan 26 10:57:01 crc kubenswrapper[4619]: E0126 10:57:01.238908 4619 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 10:57:01 crc kubenswrapper[4619]: I0126 10:57:01.260645 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:01 crc kubenswrapper[4619]: I0126 10:57:01.260856 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:01 crc kubenswrapper[4619]: E0126 10:57:01.261870 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:01 crc kubenswrapper[4619]: I0126 10:57:01.261938 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:01 crc kubenswrapper[4619]: E0126 10:57:01.262107 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:01 crc kubenswrapper[4619]: E0126 10:57:01.262449 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:01 crc kubenswrapper[4619]: E0126 10:57:01.380723 4619 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 10:57:02 crc kubenswrapper[4619]: I0126 10:57:02.261088 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:02 crc kubenswrapper[4619]: E0126 10:57:02.261236 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:03 crc kubenswrapper[4619]: I0126 10:57:03.260726 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:03 crc kubenswrapper[4619]: I0126 10:57:03.260820 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:03 crc kubenswrapper[4619]: E0126 10:57:03.260976 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:03 crc kubenswrapper[4619]: I0126 10:57:03.260726 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:03 crc kubenswrapper[4619]: E0126 10:57:03.261190 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:03 crc kubenswrapper[4619]: E0126 10:57:03.261328 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:04 crc kubenswrapper[4619]: I0126 10:57:04.260653 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:04 crc kubenswrapper[4619]: E0126 10:57:04.260848 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:05 crc kubenswrapper[4619]: I0126 10:57:05.260397 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:05 crc kubenswrapper[4619]: E0126 10:57:05.260561 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:05 crc kubenswrapper[4619]: I0126 10:57:05.260800 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:05 crc kubenswrapper[4619]: I0126 10:57:05.260832 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:05 crc kubenswrapper[4619]: E0126 10:57:05.260958 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:05 crc kubenswrapper[4619]: E0126 10:57:05.261033 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:06 crc kubenswrapper[4619]: I0126 10:57:06.260369 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:06 crc kubenswrapper[4619]: E0126 10:57:06.260569 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:06 crc kubenswrapper[4619]: E0126 10:57:06.382504 4619 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 10:57:07 crc kubenswrapper[4619]: I0126 10:57:07.260686 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:07 crc kubenswrapper[4619]: I0126 10:57:07.260740 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:07 crc kubenswrapper[4619]: I0126 10:57:07.260686 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:07 crc kubenswrapper[4619]: E0126 10:57:07.260862 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:07 crc kubenswrapper[4619]: E0126 10:57:07.260917 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:07 crc kubenswrapper[4619]: E0126 10:57:07.260991 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:08 crc kubenswrapper[4619]: I0126 10:57:08.261016 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:08 crc kubenswrapper[4619]: E0126 10:57:08.262137 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:08 crc kubenswrapper[4619]: I0126 10:57:08.262963 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 10:57:08 crc kubenswrapper[4619]: I0126 10:57:08.997420 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/3.log" Jan 26 10:57:08 crc kubenswrapper[4619]: I0126 10:57:08.999933 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerStarted","Data":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.001113 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.183146 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podStartSLOduration=105.183115036 podStartE2EDuration="1m45.183115036s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:09.030265224 +0000 UTC m=+128.064305940" watchObservedRunningTime="2026-01-26 10:57:09.183115036 +0000 UTC m=+128.217155762" Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.184707 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bs2t7"] Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.184820 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:09 crc kubenswrapper[4619]: E0126 10:57:09.184935 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.262725 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:09 crc kubenswrapper[4619]: E0126 10:57:09.262867 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.262939 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:09 crc kubenswrapper[4619]: E0126 10:57:09.262981 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:09 crc kubenswrapper[4619]: I0126 10:57:09.263021 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:09 crc kubenswrapper[4619]: E0126 10:57:09.263067 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:11 crc kubenswrapper[4619]: I0126 10:57:11.260543 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:11 crc kubenswrapper[4619]: I0126 10:57:11.260709 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:11 crc kubenswrapper[4619]: I0126 10:57:11.262241 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:11 crc kubenswrapper[4619]: E0126 10:57:11.262225 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:11 crc kubenswrapper[4619]: I0126 10:57:11.262377 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:11 crc kubenswrapper[4619]: E0126 10:57:11.262428 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:11 crc kubenswrapper[4619]: E0126 10:57:11.262556 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:11 crc kubenswrapper[4619]: E0126 10:57:11.262677 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:11 crc kubenswrapper[4619]: E0126 10:57:11.383673 4619 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 10:57:13 crc kubenswrapper[4619]: I0126 10:57:13.265174 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:13 crc kubenswrapper[4619]: E0126 10:57:13.265390 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:13 crc kubenswrapper[4619]: I0126 10:57:13.265753 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:13 crc kubenswrapper[4619]: E0126 10:57:13.265945 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:13 crc kubenswrapper[4619]: I0126 10:57:13.265974 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:13 crc kubenswrapper[4619]: E0126 10:57:13.266080 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:13 crc kubenswrapper[4619]: I0126 10:57:13.266380 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:13 crc kubenswrapper[4619]: E0126 10:57:13.266482 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:15 crc kubenswrapper[4619]: I0126 10:57:15.260730 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:15 crc kubenswrapper[4619]: I0126 10:57:15.260770 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:15 crc kubenswrapper[4619]: I0126 10:57:15.260891 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:15 crc kubenswrapper[4619]: I0126 10:57:15.260860 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:15 crc kubenswrapper[4619]: E0126 10:57:15.260993 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:15 crc kubenswrapper[4619]: E0126 10:57:15.261121 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:15 crc kubenswrapper[4619]: E0126 10:57:15.261549 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:15 crc kubenswrapper[4619]: I0126 10:57:15.261728 4619 scope.go:117] "RemoveContainer" containerID="bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406" Jan 26 10:57:15 crc kubenswrapper[4619]: E0126 10:57:15.261727 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:16 crc kubenswrapper[4619]: I0126 10:57:16.036840 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/1.log" Jan 26 10:57:16 crc kubenswrapper[4619]: I0126 10:57:16.037276 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerStarted","Data":"9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db"} Jan 26 10:57:16 crc kubenswrapper[4619]: E0126 10:57:16.385085 4619 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 10:57:17 crc kubenswrapper[4619]: I0126 10:57:17.261208 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:17 crc kubenswrapper[4619]: I0126 10:57:17.261268 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:17 crc kubenswrapper[4619]: I0126 10:57:17.261204 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:17 crc kubenswrapper[4619]: E0126 10:57:17.261422 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:17 crc kubenswrapper[4619]: E0126 10:57:17.261570 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:17 crc kubenswrapper[4619]: I0126 10:57:17.261378 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:17 crc kubenswrapper[4619]: E0126 10:57:17.261770 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:17 crc kubenswrapper[4619]: E0126 10:57:17.261907 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:19 crc kubenswrapper[4619]: I0126 10:57:19.260716 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:19 crc kubenswrapper[4619]: I0126 10:57:19.260838 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:19 crc kubenswrapper[4619]: I0126 10:57:19.261013 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:19 crc kubenswrapper[4619]: E0126 10:57:19.261007 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:19 crc kubenswrapper[4619]: I0126 10:57:19.261045 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:19 crc kubenswrapper[4619]: E0126 10:57:19.261338 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:19 crc kubenswrapper[4619]: E0126 10:57:19.261454 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:19 crc kubenswrapper[4619]: E0126 10:57:19.261215 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:21 crc kubenswrapper[4619]: I0126 10:57:21.262364 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:21 crc kubenswrapper[4619]: E0126 10:57:21.262521 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 10:57:21 crc kubenswrapper[4619]: I0126 10:57:21.262990 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:21 crc kubenswrapper[4619]: I0126 10:57:21.263097 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:21 crc kubenswrapper[4619]: E0126 10:57:21.263129 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bs2t7" podUID="6a4ef536-778e-47e5-afb2-539e96eba778" Jan 26 10:57:21 crc kubenswrapper[4619]: I0126 10:57:21.263165 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:21 crc kubenswrapper[4619]: E0126 10:57:21.263258 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 10:57:21 crc kubenswrapper[4619]: E0126 10:57:21.263340 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.260574 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.260633 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.260587 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.260587 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.263340 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.263766 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.266283 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.266709 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.267117 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 10:57:23 crc kubenswrapper[4619]: I0126 10:57:23.267606 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.560677 4619 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.619766 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.621040 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.621241 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-th5wb"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.622281 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.625799 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.626736 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.632568 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.632669 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.632742 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.633668 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.633832 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.634145 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.634227 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.634497 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.634811 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.639689 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.639998 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.640322 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.640598 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.641063 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.641093 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.642056 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.644216 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hk46v"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.645018 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.645681 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6jj4w"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.646375 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.648048 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.652725 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.656369 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.657224 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.657607 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.657865 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.659904 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.660005 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.659920 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.660206 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.660745 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.661701 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.666280 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.666451 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.668794 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.669049 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.669318 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.669517 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.669746 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.669881 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.671734 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.674063 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.674737 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.677712 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.677970 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.678114 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.678331 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.678420 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.679010 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.679236 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.679336 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.679530 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.679902 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.680129 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.680163 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.680384 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.680504 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-llkgb"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.681210 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682170 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682335 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682459 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682581 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682715 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.682911 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.700104 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.700842 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k4prd"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.701030 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.705911 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.716244 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.716950 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.717443 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.717870 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.723158 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.739069 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.739308 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.739630 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.741240 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.743547 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.743754 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.743898 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744018 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7f9\" (UniqueName: \"kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744137 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-encryption-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744249 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8w5h\" (UniqueName: 
\"kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744370 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bff9a47-4685-457d-8a24-6139113cdbd8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744513 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-policies\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744695 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-serving-cert\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.744890 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745071 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745253 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-image-import-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745368 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745498 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-encryption-config\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745633 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7rm\" (UniqueName: \"kubernetes.io/projected/20ef7efa-6ea4-45aa-b18b-af795f8d0758-kube-api-access-gj7rm\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745751 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745866 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745984 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-serving-cert\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746097 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746200 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746303 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746399 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746510 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj89p\" (UniqueName: \"kubernetes.io/projected/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-kube-api-access-kj89p\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746648 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzst4\" (UniqueName: \"kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746780 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.746902 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-images\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.747003 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-client\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.743918 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.745270 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.753146 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756533 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-config\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756562 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756590 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-serving-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756630 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756668 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756691 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756709 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756728 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-client\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756743 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756761 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756787 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit-dir\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756803 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282mj\" (UniqueName: \"kubernetes.io/projected/0bff9a47-4685-457d-8a24-6139113cdbd8-kube-api-access-282mj\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756819 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756841 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756878 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-dir\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756905 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756929 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 
10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.756951 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-node-pullsecrets\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.747326 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.747495 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.749991 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.750125 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.751171 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.751525 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.751934 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752246 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752523 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752750 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752801 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752857 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752893 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752934 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.752995 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.753065 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.755966 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 10:57:25 crc 
kubenswrapper[4619]: I0126 10:57:25.762559 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.765153 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6xg9r"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.765605 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.768256 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.768092 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.768219 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.768765 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.769500 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtkbh"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.770876 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.771003 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.781701 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.782679 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.787690 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.799374 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kq9qd"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.800451 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.800861 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.801680 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.802689 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.804057 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.805100 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.806088 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.808069 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.809437 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.814199 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-98w5w"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.815675 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.836792 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-snrcm"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.837494 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.837947 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.838273 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.841038 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hk46v"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.842525 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.843090 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6jj4w"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.843196 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.846185 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.854974 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.856700 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857636 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857679 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857703 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857723 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-image-import-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857739 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857764 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-encryption-config\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857779 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857798 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b65f743-5226-4893-bcbd-41a055959448-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857815 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178692b1-15f3-46ad-b4ca-abc585040e46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857835 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7rm\" (UniqueName: \"kubernetes.io/projected/20ef7efa-6ea4-45aa-b18b-af795f8d0758-kube-api-access-gj7rm\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857849 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.857851 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858303 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1749757b-13ec-4692-8194-0816220d378c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858331 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2mw\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-kube-api-access-fj2mw\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858369 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-serving-cert\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858416 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858445 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858476 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858495 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b65f743-5226-4893-bcbd-41a055959448-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858514 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37ac2fdc-a31f-4bb2-91cf-962b10add71c-machine-approver-tls\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858537 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858557 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858577 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj89p\" (UniqueName: \"kubernetes.io/projected/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-kube-api-access-kj89p\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858598 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grpb\" (UniqueName: \"kubernetes.io/projected/5b65f743-5226-4893-bcbd-41a055959448-kube-api-access-9grpb\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858637 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqptw\" (UniqueName: \"kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858659 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzst4\" (UniqueName: \"kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858678 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858700 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-images\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858718 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-client\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858736 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm87g\" (UniqueName: \"kubernetes.io/projected/1749757b-13ec-4692-8194-0816220d378c-kube-api-access-hm87g\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858761 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858780 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-config\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858797 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: 
\"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858816 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858836 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wkf\" (UniqueName: \"kubernetes.io/projected/97a177f5-24c5-4f7e-9bc6-8c234fe0cf19-kube-api-access-g5wkf\") pod \"downloads-7954f5f757-k4prd\" (UID: \"97a177f5-24c5-4f7e-9bc6-8c234fe0cf19\") " pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858854 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e239a69-8537-46b7-a0e0-30d8382b4e22-config\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858876 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-serving-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858913 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858944 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858963 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.858983 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859003 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859026 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54x9s\" (UniqueName: \"kubernetes.io/projected/37ac2fdc-a31f-4bb2-91cf-962b10add71c-kube-api-access-54x9s\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859045 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2g9x\" (UniqueName: \"kubernetes.io/projected/554090f8-56c3-48d5-ab2e-082c8038b972-kube-api-access-j2g9x\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859064 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859084 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859110 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-client\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859127 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859145 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/554090f8-56c3-48d5-ab2e-082c8038b972-metrics-tls\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859172 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit-dir\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859192 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282mj\" (UniqueName: \"kubernetes.io/projected/0bff9a47-4685-457d-8a24-6139113cdbd8-kube-api-access-282mj\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859209 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859228 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859259 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-dir\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859281 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859299 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859318 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-node-pullsecrets\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859335 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e239a69-8537-46b7-a0e0-30d8382b4e22-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859352 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859370 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859390 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859407 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859437 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7f9\" (UniqueName: \"kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859455 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859475 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-encryption-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859496 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-auth-proxy-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859518 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1749757b-13ec-4692-8194-0816220d378c-serving-cert\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859539 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8w5h\" (UniqueName: \"kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859559 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/178692b1-15f3-46ad-b4ca-abc585040e46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859588 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bff9a47-4685-457d-8a24-6139113cdbd8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.861236 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.861740 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2svc"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.862034 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpbzh"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.862436 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.859627 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-policies\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.862725 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-serving-cert\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.862763 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e239a69-8537-46b7-a0e0-30d8382b4e22-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.862796 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.863503 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.865863 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.866468 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.871815 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.872511 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.873855 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-images\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.884216 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.885083 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.886981 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.887882 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-image-import-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.889271 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.891253 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.893798 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bff9a47-4685-457d-8a24-6139113cdbd8-config\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.893247 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.891572 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.894308 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.894803 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-serving-ca\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.895600 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.901372 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.901876 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.902348 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kb2jr"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.902944 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.903271 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.903408 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.904224 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.912248 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931070 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xr254"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931571 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit-dir\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931774 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-policies\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931790 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931798 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.931881 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.932499 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.932702 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.932756 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ef7efa-6ea4-45aa-b18b-af795f8d0758-audit-dir\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.932918 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.932977 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.933295 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.933314 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.933402 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-node-pullsecrets\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " 
pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.933418 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-serving-cert\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.933603 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.934005 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.934566 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.935583 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-encryption-config\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.935818 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.936196 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.936544 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.936773 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.936907 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.937061 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.937494 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.938066 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-audit\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.938915 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.940298 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.916837 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ef7efa-6ea4-45aa-b18b-af795f8d0758-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.942397 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-etcd-client\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.948709 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-encryption-config\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.951403 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-etcd-client\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.952977 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.954603 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.955657 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-llkgb"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.961608 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ef7efa-6ea4-45aa-b18b-af795f8d0758-serving-cert\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.961892 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.962203 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.962373 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.962460 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.962662 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.962749 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.963282 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.963530 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.966816 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.973989 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.975024 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.975470 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.967379 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bff9a47-4685-457d-8a24-6139113cdbd8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.967441 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.977062 4619 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.977518 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.978206 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.983124 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.983482 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.983688 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.983894 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.984386 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.986514 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.987052 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.987156 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.986584 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.987522 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.978748 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k4prd"] Jan 26 10:57:25 crc kubenswrapper[4619]: I0126 10:57:25.993093 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-th5wb"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:25.998878 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.003478 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle\") pod \"console-f9d7485db-zdjpz\" (UID: 
\"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.003554 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b65f743-5226-4893-bcbd-41a055959448-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.003721 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178692b1-15f3-46ad-b4ca-abc585040e46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.003767 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1749757b-13ec-4692-8194-0816220d378c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.003891 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj2mw\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-kube-api-access-fj2mw\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004090 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b65f743-5226-4893-bcbd-41a055959448-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004201 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37ac2fdc-a31f-4bb2-91cf-962b10add71c-machine-approver-tls\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004351 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grpb\" (UniqueName: \"kubernetes.io/projected/5b65f743-5226-4893-bcbd-41a055959448-kube-api-access-9grpb\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004449 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqptw\" (UniqueName: \"kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " 
pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004586 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm87g\" (UniqueName: \"kubernetes.io/projected/1749757b-13ec-4692-8194-0816220d378c-kube-api-access-hm87g\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004646 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004676 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wkf\" (UniqueName: \"kubernetes.io/projected/97a177f5-24c5-4f7e-9bc6-8c234fe0cf19-kube-api-access-g5wkf\") pod \"downloads-7954f5f757-k4prd\" (UID: \"97a177f5-24c5-4f7e-9bc6-8c234fe0cf19\") " pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004726 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e239a69-8537-46b7-a0e0-30d8382b4e22-config\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004770 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004817 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54x9s\" (UniqueName: \"kubernetes.io/projected/37ac2fdc-a31f-4bb2-91cf-962b10add71c-kube-api-access-54x9s\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004846 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2g9x\" (UniqueName: \"kubernetes.io/projected/554090f8-56c3-48d5-ab2e-082c8038b972-kube-api-access-j2g9x\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004894 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.004922 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/554090f8-56c3-48d5-ab2e-082c8038b972-metrics-tls\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005021 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e239a69-8537-46b7-a0e0-30d8382b4e22-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005075 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005105 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005347 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-auth-proxy-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005405 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1749757b-13ec-4692-8194-0816220d378c-serving-cert\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005471 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/178692b1-15f3-46ad-b4ca-abc585040e46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005509 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e239a69-8537-46b7-a0e0-30d8382b4e22-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005565 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.005596 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.006556 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.007067 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.008558 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b65f743-5226-4893-bcbd-41a055959448-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.009099 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1749757b-13ec-4692-8194-0816220d378c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.009545 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.012514 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-auth-proxy-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.013289 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.013850 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.022204 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1749757b-13ec-4692-8194-0816220d378c-serving-cert\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.023378 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.024477 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac2fdc-a31f-4bb2-91cf-962b10add71c-config\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.024553 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.027078 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.029188 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.031977 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.036516 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.038351 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/178692b1-15f3-46ad-b4ca-abc585040e46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.043131 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.043698 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b65f743-5226-4893-bcbd-41a055959448-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.044015 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.044715 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37ac2fdc-a31f-4bb2-91cf-962b10add71c-machine-approver-tls\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.045102 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178692b1-15f3-46ad-b4ca-abc585040e46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.045220 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.045565 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kq9qd"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.047042 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.048401 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.049661 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.050670 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.050949 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.052675 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t2xxs"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.053412 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.055114 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.055564 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.057654 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kb2jr"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.058775 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/554090f8-56c3-48d5-ab2e-082c8038b972-metrics-tls\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.058843 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8k974"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.059737 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8k974" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.059959 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.061901 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.064080 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.064136 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpbzh"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.066070 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.069102 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.069245 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.074956 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6xg9r"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.076504 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtkbh"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.081879 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.083447 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.085866 4619 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ff7zj"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.087224 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.087356 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.087788 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.090377 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.091242 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ff7zj"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.092281 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2svc"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.094298 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-98w5w"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.095123 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.097034 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t2xxs"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.100467 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.101492 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.102792 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.104764 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.105485 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8k974"] Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.124781 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.145370 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.164743 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.185975 4619 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.204586 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.224812 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.244397 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.264413 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.284143 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.304850 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.324429 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.344056 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.364928 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.384646 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.404790 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.424846 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.444796 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.464331 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.485559 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.505268 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.524758 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.544959 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.560768 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.563702 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.583824 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.605141 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.625063 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.645428 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.658787 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e239a69-8537-46b7-a0e0-30d8382b4e22-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.665197 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.670368 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e239a69-8537-46b7-a0e0-30d8382b4e22-config\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.684010 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.705032 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.724500 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.765703 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.785108 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.805327 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.825719 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.844583 4619 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.865661 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.883007 4619 request.go:700] Waited for 1.010252326s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.885987 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.904779 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.925727 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.945507 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.966038 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 10:57:26 crc kubenswrapper[4619]: I0126 10:57:26.984947 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.006489 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.019738 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:27 crc kubenswrapper[4619]: E0126 10:57:27.019960 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:59:29.019926284 +0000 UTC m=+268.053967000 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.020196 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.023805 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.026969 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.045607 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.065112 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.087012 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.107828 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.122052 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.122220 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.122282 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.124913 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.125735 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.130661 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.130896 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.164450 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7f9\" (UniqueName: \"kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9\") pod \"route-controller-manager-6576b87f9c-bctm2\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.164849 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.184719 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.191251 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.202584 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.214842 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.231774 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8w5h\" (UniqueName: \"kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h\") pod \"oauth-openshift-558db77b4-6jj4w\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") " pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.253699 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7rm\" (UniqueName: \"kubernetes.io/projected/20ef7efa-6ea4-45aa-b18b-af795f8d0758-kube-api-access-gj7rm\") pod \"apiserver-7bbb656c7d-z2sj2\" (UID: \"20ef7efa-6ea4-45aa-b18b-af795f8d0758\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.261302 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282mj\" (UniqueName: \"kubernetes.io/projected/0bff9a47-4685-457d-8a24-6139113cdbd8-kube-api-access-282mj\") pod \"machine-api-operator-5694c8668f-hk46v\" (UID: \"0bff9a47-4685-457d-8a24-6139113cdbd8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.264887 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.267526 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.270819 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.287488 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.325013 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.339548 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj89p\" (UniqueName: \"kubernetes.io/projected/1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689-kube-api-access-kj89p\") pod \"apiserver-76f77b778f-th5wb\" (UID: \"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689\") " pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.345399 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.365032 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.385138 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.404758 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.425882 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.445513 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.456944 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.474806 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.495786 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.508285 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.512086 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzst4\" (UniqueName: \"kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4\") pod \"controller-manager-879f6c89f-gwzgx\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.525298 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.541555 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" Jan 26 10:57:27 crc kubenswrapper[4619]: W0126 10:57:27.542031 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-6a28ac241d2466d6f0fa7ca69ff987a6c1d8b568384d506d80b3e36f7927c7c2 WatchSource:0}: Error finding container 6a28ac241d2466d6f0fa7ca69ff987a6c1d8b568384d506d80b3e36f7927c7c2: Status 404 returned error can't find the container with id 6a28ac241d2466d6f0fa7ca69ff987a6c1d8b568384d506d80b3e36f7927c7c2 Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.544386 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: W0126 10:57:27.558277 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b327121360848ca2e72737f56ce49a6362d516739c96e34d4373f88002f60b98 WatchSource:0}: Error finding container b327121360848ca2e72737f56ce49a6362d516739c96e34d4373f88002f60b98: Status 404 returned error can't find the container with id b327121360848ca2e72737f56ce49a6362d516739c96e34d4373f88002f60b98 Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.564513 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.584954 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.587125 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6jj4w"] Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.608250 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.614121 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 10:57:27 crc kubenswrapper[4619]: W0126 10:57:27.629243 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41dc8f80_5742_4e4b_943e_571ad0e59027.slice/crio-b00b29b64c7a15b35ae5232c8f63469f95fd6bbe2850b904aeb567299ef45911 WatchSource:0}: Error finding container b00b29b64c7a15b35ae5232c8f63469f95fd6bbe2850b904aeb567299ef45911: Status 404 returned error can't find the container with id b00b29b64c7a15b35ae5232c8f63469f95fd6bbe2850b904aeb567299ef45911 Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.666919 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54x9s\" (UniqueName: \"kubernetes.io/projected/37ac2fdc-a31f-4bb2-91cf-962b10add71c-kube-api-access-54x9s\") pod \"machine-approver-56656f9798-r8hpj\" (UID: \"37ac2fdc-a31f-4bb2-91cf-962b10add71c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.701666 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2g9x\" (UniqueName: 
\"kubernetes.io/projected/554090f8-56c3-48d5-ab2e-082c8038b972-kube-api-access-j2g9x\") pod \"dns-operator-744455d44c-kq9qd\" (UID: \"554090f8-56c3-48d5-ab2e-082c8038b972\") " pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.709228 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj2mw\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-kube-api-access-fj2mw\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.724976 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"] Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.725443 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e239a69-8537-46b7-a0e0-30d8382b4e22-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g6cvw\" (UID: \"0e239a69-8537-46b7-a0e0-30d8382b4e22\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.741597 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/178692b1-15f3-46ad-b4ca-abc585040e46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zc6c9\" (UID: \"178692b1-15f3-46ad-b4ca-abc585040e46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.762537 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm87g\" (UniqueName: \"kubernetes.io/projected/1749757b-13ec-4692-8194-0816220d378c-kube-api-access-hm87g\") pod \"openshift-config-operator-7777fb866f-llkgb\" (UID: \"1749757b-13ec-4692-8194-0816220d378c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:27 crc kubenswrapper[4619]: W0126 10:57:27.776900 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ef7efa_6ea4_45aa_b18b_af795f8d0758.slice/crio-e9293a94961a231c6d774016b80ecca7ee4ef373bc0ef69d896eef408f454574 WatchSource:0}: Error finding container e9293a94961a231c6d774016b80ecca7ee4ef373bc0ef69d896eef408f454574: Status 404 returned error can't find the container with id e9293a94961a231c6d774016b80ecca7ee4ef373bc0ef69d896eef408f454574 Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.780480 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.781998 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grpb\" (UniqueName: \"kubernetes.io/projected/5b65f743-5226-4893-bcbd-41a055959448-kube-api-access-9grpb\") pod \"openshift-apiserver-operator-796bbdcf4f-pp656\" (UID: \"5b65f743-5226-4893-bcbd-41a055959448\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.808763 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.821349 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqptw\" (UniqueName: \"kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw\") pod \"console-f9d7485db-zdjpz\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") " pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.822112 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wkf\" (UniqueName: \"kubernetes.io/projected/97a177f5-24c5-4f7e-9bc6-8c234fe0cf19-kube-api-access-g5wkf\") pod \"downloads-7954f5f757-k4prd\" (UID: \"97a177f5-24c5-4f7e-9bc6-8c234fe0cf19\") " pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.822438 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-th5wb"] Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.827378 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.844808 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.865809 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hk46v"] Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.866140 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.884785 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.894912 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.903150 4619 request.go:700] Waited for 1.843156993s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.905289 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.922466 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.923892 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.946124 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.952920 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.953109 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.953197 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.964963 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.970556 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:27 crc kubenswrapper[4619]: I0126 10:57:27.984424 4619 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.003276 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.491912 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499033 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmc9\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499086 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499113 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499151 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499357 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499458 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499578 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.499768 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.510987 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.010957416 +0000 UTC m=+148.044998132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.532041 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b327121360848ca2e72737f56ce49a6362d516739c96e34d4373f88002f60b98"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.538833 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6a28ac241d2466d6f0fa7ca69ff987a6c1d8b568384d506d80b3e36f7927c7c2"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.546006 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" event={"ID":"20ef7efa-6ea4-45aa-b18b-af795f8d0758","Type":"ContainerStarted","Data":"e9293a94961a231c6d774016b80ecca7ee4ef373bc0ef69d896eef408f454574"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.570983 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" event={"ID":"81afd38e-4b98-450d-89b1-06efe9f059e8","Type":"ContainerStarted","Data":"c31b8e116cad99a33440d2d96143a07b4a137850bebd5b3c70d076ef264f0a13"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.603363 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"10ce0b3bf2e8fca0fa3bcadc302efeaab7a609363648b5f22edc8a629fb64de9"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.604842 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" event={"ID":"41dc8f80-5742-4e4b-943e-571ad0e59027","Type":"ContainerStarted","Data":"b00b29b64c7a15b35ae5232c8f63469f95fd6bbe2850b904aeb567299ef45911"} Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.606303 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.606833 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.606977 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.106942434 +0000 UTC m=+148.140983150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608543 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608581 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvs6f\" (UniqueName: \"kubernetes.io/projected/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-kube-api-access-jvs6f\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608598 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608629 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/73283b30-a00c-44f0-95a1-48257ac6ae48-config\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608646 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsn62\" (UniqueName: \"kubernetes.io/projected/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-kube-api-access-jsn62\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608676 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-config-volume\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608691 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608713 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0026923c-f271-4a2c-864b-71f051a5f093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608860 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3177c-80c0-4074-adef-91e20ece02b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608893 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608909 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/424db312-75a1-4a4e-954d-4100001cb1ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.608958 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-service-ca-bundle\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609015 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b62915f7-c2aa-468c-9cba-05ee4712a811-proxy-tls\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609045 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtdv\" (UniqueName: \"kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609128 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6vg\" (UniqueName: \"kubernetes.io/projected/da51f450-5541-4a98-85e4-f9ce8e81fc1a-kube-api-access-lk6vg\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609161 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-tmpfs\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609344 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609482 4619 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bctm2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609519 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.609428 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-proxy-tls\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: 
\"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.612379 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-key\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614823 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614877 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rbp\" (UniqueName: \"kubernetes.io/projected/28521f3a-f67f-461b-b56d-d8d023bebde2-kube-api-access-g9rbp\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614909 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-webhook-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614931 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614953 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e21dce7-f265-4965-b02a-64ca62ff71b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614977 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-config\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.614999 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5qq\" (UniqueName: \"kubernetes.io/projected/7e21dce7-f265-4965-b02a-64ca62ff71b4-kube-api-access-gt5qq\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615035 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4glq\" (UniqueName: \"kubernetes.io/projected/01f7dbf9-8d49-4dca-9285-221957d8df27-kube-api-access-j4glq\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615087 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615111 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615133 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxq7f\" (UniqueName: \"kubernetes.io/projected/73283b30-a00c-44f0-95a1-48257ac6ae48-kube-api-access-jxq7f\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615188 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73283b30-a00c-44f0-95a1-48257ac6ae48-serving-cert\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615211 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64vv\" (UniqueName: \"kubernetes.io/projected/c7a17032-bac8-4534-8671-b0e660c7c685-kube-api-access-z64vv\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615272 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-stats-auth\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615295 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-config\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 
10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615315 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llzt\" (UniqueName: \"kubernetes.io/projected/6431707b-d1e3-449c-a20a-2b4906880ad2-kube-api-access-7llzt\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615350 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3177c-80c0-4074-adef-91e20ece02b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615371 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24kc\" (UniqueName: \"kubernetes.io/projected/b62915f7-c2aa-468c-9cba-05ee4712a811-kube-api-access-s24kc\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615396 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615425 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-serving-cert\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.615449 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618044 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618109 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618159 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9t8\" (UniqueName: \"kubernetes.io/projected/9a275d95-f049-409f-aa77-8ed837840461-kube-api-access-7x9t8\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618186 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpx7\" (UniqueName: \"kubernetes.io/projected/a6ca7682-4544-4745-a71c-0f5b051d1f59-kube-api-access-kqpx7\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618213 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-socket-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618237 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nb87\" (UniqueName: \"kubernetes.io/projected/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-kube-api-access-5nb87\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618257 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/424db312-75a1-4a4e-954d-4100001cb1ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618280 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-node-bootstrap-token\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618304 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvvk\" (UniqueName: \"kubernetes.io/projected/27a11a49-1783-4016-a84f-25b5c4eb9584-kube-api-access-tfvvk\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618324 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-srv-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 
10:57:28.618371 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618397 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca8acf28-fab4-4031-8106-3a1a99f7717f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618418 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-metrics-tls\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618444 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618483 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618508 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/696254c7-95ab-43d9-9919-5d1146eec08e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618536 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgn8c\" (UniqueName: \"kubernetes.io/projected/679cfe9e-496a-4d70-bfdf-7401ea224c8b-kube-api-access-hgn8c\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618555 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618570 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618592 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618607 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618640 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0f3177c-80c0-4074-adef-91e20ece02b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618663 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618678 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618695 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdkh\" (UniqueName: \"kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618716 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ztx\" (UniqueName: \"kubernetes.io/projected/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-kube-api-access-m6ztx\") pod 
\"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618735 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv594\" (UniqueName: \"kubernetes.io/projected/08d7d14b-8c4c-4bb6-99d4-71618136533b-kube-api-access-cv594\") pod \"migrator-59844c95c7-z2hch\" (UID: \"08d7d14b-8c4c-4bb6-99d4-71618136533b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618757 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618774 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6scv\" (UniqueName: \"kubernetes.io/projected/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-kube-api-access-f6scv\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618794 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a17032-bac8-4534-8671-b0e660c7c685-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618813 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-default-certificate\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618841 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-registration-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618859 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-serving-cert\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618876 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-trusted-ca\") pod 
\"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618893 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-config\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618908 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca8acf28-fab4-4031-8106-3a1a99f7717f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618925 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zww6h\" (UniqueName: \"kubernetes.io/projected/2b01dc83-f1cc-447a-9b3c-3b9762465220-kube-api-access-zww6h\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618940 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-csi-data-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618972 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-metrics-certs\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.618985 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-certs\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619000 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-config\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619033 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmc9\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9\") pod \"image-registry-697d97f7c8-84lr5\" (UID: 
\"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619049 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwf5\" (UniqueName: \"kubernetes.io/projected/0026923c-f271-4a2c-864b-71f051a5f093-kube-api-access-zmwf5\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619070 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a11a49-1783-4016-a84f-25b5c4eb9584-serving-cert\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619099 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a275d95-f049-409f-aa77-8ed837840461-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619115 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-client\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619133 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619148 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01f7dbf9-8d49-4dca-9285-221957d8df27-cert\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619172 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-mountpoint-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619189 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbfh8\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-kube-api-access-sbfh8\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619205 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmn4\" (UniqueName: \"kubernetes.io/projected/696254c7-95ab-43d9-9919-5d1146eec08e-kube-api-access-vmmn4\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619219 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-plugins-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619238 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nptd\" (UniqueName: \"kubernetes.io/projected/ca8acf28-fab4-4031-8106-3a1a99f7717f-kube-api-access-6nptd\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619254 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-service-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619269 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619283 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-images\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619300 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a275d95-f049-409f-aa77-8ed837840461-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619316 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-srv-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" 
(UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.619324 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.620760 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.120745932 +0000 UTC m=+148.154786648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.632160 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.641251 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.661605 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.697840 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjmc9\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.721944 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722196 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722226 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722252 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/696254c7-95ab-43d9-9919-5d1146eec08e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722277 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgn8c\" (UniqueName: \"kubernetes.io/projected/679cfe9e-496a-4d70-bfdf-7401ea224c8b-kube-api-access-hgn8c\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722309 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722332 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0f3177c-80c0-4074-adef-91e20ece02b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722350 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722367 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722386 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdkh\" (UniqueName: 
\"kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ztx\" (UniqueName: \"kubernetes.io/projected/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-kube-api-access-m6ztx\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722424 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv594\" (UniqueName: \"kubernetes.io/projected/08d7d14b-8c4c-4bb6-99d4-71618136533b-kube-api-access-cv594\") pod \"migrator-59844c95c7-z2hch\" (UID: \"08d7d14b-8c4c-4bb6-99d4-71618136533b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722440 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722459 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6scv\" (UniqueName: \"kubernetes.io/projected/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-kube-api-access-f6scv\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722481 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a17032-bac8-4534-8671-b0e660c7c685-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722499 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-default-certificate\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722515 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-registration-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722531 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-serving-cert\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: 
\"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722548 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-trusted-ca\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722569 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-config\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722592 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca8acf28-fab4-4031-8106-3a1a99f7717f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722636 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-metrics-certs\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722670 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-certs\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722690 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zww6h\" (UniqueName: \"kubernetes.io/projected/2b01dc83-f1cc-447a-9b3c-3b9762465220-kube-api-access-zww6h\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722707 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-csi-data-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722730 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-config\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722751 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zmwf5\" (UniqueName: \"kubernetes.io/projected/0026923c-f271-4a2c-864b-71f051a5f093-kube-api-access-zmwf5\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722772 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a11a49-1783-4016-a84f-25b5c4eb9584-serving-cert\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722795 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a275d95-f049-409f-aa77-8ed837840461-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722814 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-client\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722833 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01f7dbf9-8d49-4dca-9285-221957d8df27-cert\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722852 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-mountpoint-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722871 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbfh8\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-kube-api-access-sbfh8\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722889 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmmn4\" (UniqueName: \"kubernetes.io/projected/696254c7-95ab-43d9-9919-5d1146eec08e-kube-api-access-vmmn4\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722909 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-plugins-dir\") pod 
\"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722930 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nptd\" (UniqueName: \"kubernetes.io/projected/ca8acf28-fab4-4031-8106-3a1a99f7717f-kube-api-access-6nptd\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722948 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-service-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722968 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.722989 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a275d95-f049-409f-aa77-8ed837840461-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723007 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-images\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723026 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-srv-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723046 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723064 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73283b30-a00c-44f0-95a1-48257ac6ae48-config\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723081 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsn62\" (UniqueName: \"kubernetes.io/projected/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-kube-api-access-jsn62\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723100 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvs6f\" (UniqueName: \"kubernetes.io/projected/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-kube-api-access-jvs6f\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723120 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723139 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-config-volume\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723163 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0026923c-f271-4a2c-864b-71f051a5f093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723182 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.723223 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.223191999 +0000 UTC m=+148.257232715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723365 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723381 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3177c-80c0-4074-adef-91e20ece02b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723443 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723475 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/424db312-75a1-4a4e-954d-4100001cb1ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723498 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qtdv\" (UniqueName: \"kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723522 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-service-ca-bundle\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723548 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b62915f7-c2aa-468c-9cba-05ee4712a811-proxy-tls\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723579 4619 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lk6vg\" (UniqueName: \"kubernetes.io/projected/da51f450-5541-4a98-85e4-f9ce8e81fc1a-kube-api-access-lk6vg\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723601 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-tmpfs\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723640 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-proxy-tls\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723661 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-key\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723691 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723709 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e21dce7-f265-4965-b02a-64ca62ff71b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723734 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-config\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723752 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rbp\" (UniqueName: \"kubernetes.io/projected/28521f3a-f67f-461b-b56d-d8d023bebde2-kube-api-access-g9rbp\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723746 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723770 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-webhook-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723788 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5qq\" (UniqueName: \"kubernetes.io/projected/7e21dce7-f265-4965-b02a-64ca62ff71b4-kube-api-access-gt5qq\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723807 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4glq\" (UniqueName: \"kubernetes.io/projected/01f7dbf9-8d49-4dca-9285-221957d8df27-kube-api-access-j4glq\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723831 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723854 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxq7f\" (UniqueName: \"kubernetes.io/projected/73283b30-a00c-44f0-95a1-48257ac6ae48-kube-api-access-jxq7f\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723877 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73283b30-a00c-44f0-95a1-48257ac6ae48-serving-cert\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723893 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64vv\" (UniqueName: \"kubernetes.io/projected/c7a17032-bac8-4534-8671-b0e660c7c685-kube-api-access-z64vv\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723921 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-config\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723937 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7llzt\" (UniqueName: \"kubernetes.io/projected/6431707b-d1e3-449c-a20a-2b4906880ad2-kube-api-access-7llzt\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723956 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-stats-auth\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723975 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3177c-80c0-4074-adef-91e20ece02b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.723993 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24kc\" (UniqueName: \"kubernetes.io/projected/b62915f7-c2aa-468c-9cba-05ee4712a811-kube-api-access-s24kc\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724014 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-serving-cert\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724078 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9t8\" (UniqueName: \"kubernetes.io/projected/9a275d95-f049-409f-aa77-8ed837840461-kube-api-access-7x9t8\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724095 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724114 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpx7\" (UniqueName: \"kubernetes.io/projected/a6ca7682-4544-4745-a71c-0f5b051d1f59-kube-api-access-kqpx7\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724134 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-socket-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724153 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-node-bootstrap-token\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724178 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfvvk\" (UniqueName: \"kubernetes.io/projected/27a11a49-1783-4016-a84f-25b5c4eb9584-kube-api-access-tfvvk\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724194 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nb87\" (UniqueName: \"kubernetes.io/projected/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-kube-api-access-5nb87\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724208 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/424db312-75a1-4a4e-954d-4100001cb1ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724229 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-srv-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724257 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-metrics-tls\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724277 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724296 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca8acf28-fab4-4031-8106-3a1a99f7717f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.724313 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.725035 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.725136 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.726028 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.727937 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a275d95-f049-409f-aa77-8ed837840461-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.728061 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/696254c7-95ab-43d9-9919-5d1146eec08e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.728607 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b62915f7-c2aa-468c-9cba-05ee4712a811-images\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.730254 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-plugins-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.731154 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-mountpoint-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.731271 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-service-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.731303 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a11a49-1783-4016-a84f-25b5c4eb9584-serving-cert\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.731357 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a275d95-f049-409f-aa77-8ed837840461-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.731902 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.733328 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da51f450-5541-4a98-85e4-f9ce8e81fc1a-srv-cert\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.733985 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01f7dbf9-8d49-4dca-9285-221957d8df27-cert\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.734232 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-config\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.734867 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73283b30-a00c-44f0-95a1-48257ac6ae48-config\") pod \"service-ca-operator-777779d784-k2svc\" (UID: 
\"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.734980 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-registration-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.735859 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-config-volume\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.736964 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.737321 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-service-ca-bundle\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.739982 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-client\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.740717 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-config\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.746546 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-config\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.747254 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a11a49-1783-4016-a84f-25b5c4eb9584-trusted-ca\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.749483 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-tmpfs\") pod 
\"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.752011 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7a17032-bac8-4534-8671-b0e660c7c685-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.752639 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0026923c-f271-4a2c-864b-71f051a5f093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.753107 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-default-certificate\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.753435 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-serving-cert\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.753535 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3177c-80c0-4074-adef-91e20ece02b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.755117 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-csi-data-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.755849 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.757084 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-config\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.759182 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28521f3a-f67f-461b-b56d-d8d023bebde2-etcd-ca\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.759378 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.760048 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b62915f7-c2aa-468c-9cba-05ee4712a811-proxy-tls\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.760497 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-proxy-tls\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.761046 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-stats-auth\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.761477 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-certs\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.762004 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-webhook-cert\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.762432 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca8acf28-fab4-4031-8106-3a1a99f7717f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.763188 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/424db312-75a1-4a4e-954d-4100001cb1ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.763556 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/679cfe9e-496a-4d70-bfdf-7401ea224c8b-socket-dir\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.764305 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.764811 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca8acf28-fab4-4031-8106-3a1a99f7717f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.766538 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e21dce7-f265-4965-b02a-64ca62ff71b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.767500 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.770926 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73283b30-a00c-44f0-95a1-48257ac6ae48-serving-cert\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.770996 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-srv-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.771119 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b01dc83-f1cc-447a-9b3c-3b9762465220-node-bootstrap-token\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 
10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.773365 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28521f3a-f67f-461b-b56d-d8d023bebde2-serving-cert\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.780292 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6431707b-d1e3-449c-a20a-2b4906880ad2-signing-key\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.781865 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-metrics-certs\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.789089 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9"] Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.790006 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.790499 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-metrics-tls\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.793066 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3177c-80c0-4074-adef-91e20ece02b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.793268 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/424db312-75a1-4a4e-954d-4100001cb1ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.805402 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6ca7682-4544-4745-a71c-0f5b051d1f59-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.816273 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7468d5f2-6ad2-480c-bad5-4bc64f6cbaab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8qrcl\" (UID: \"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.817140 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwf5\" (UniqueName: \"kubernetes.io/projected/0026923c-f271-4a2c-864b-71f051a5f093-kube-api-access-zmwf5\") pod \"cluster-samples-operator-665b6dd947-5fqdl\" (UID: \"0026923c-f271-4a2c-864b-71f051a5f093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.819404 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0f3177c-80c0-4074-adef-91e20ece02b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4qxb4\" (UID: \"c0f3177c-80c0-4074-adef-91e20ece02b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.828642 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.828944 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.833021 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.332995906 +0000 UTC m=+148.367036622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.834186 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.842590 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgn8c\" (UniqueName: \"kubernetes.io/projected/679cfe9e-496a-4d70-bfdf-7401ea224c8b-kube-api-access-hgn8c\") pod \"csi-hostpathplugin-ff7zj\" (UID: \"679cfe9e-496a-4d70-bfdf-7401ea224c8b\") " pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.854519 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.866585 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbfh8\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-kube-api-access-sbfh8\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.873132 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdkh\" (UniqueName: \"kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh\") pod \"collect-profiles-29490405-2tm2l\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.903458 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ztx\" (UniqueName: \"kubernetes.io/projected/7fd097cd-fa31-45f4-bd1d-f925d43f05cc-kube-api-access-m6ztx\") pod \"machine-config-controller-84d6567774-t67nv\" (UID: \"7fd097cd-fa31-45f4-bd1d-f925d43f05cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.903739 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv594\" (UniqueName: \"kubernetes.io/projected/08d7d14b-8c4c-4bb6-99d4-71618136533b-kube-api-access-cv594\") pod \"migrator-59844c95c7-z2hch\" (UID: \"08d7d14b-8c4c-4bb6-99d4-71618136533b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" Jan 26 10:57:28 crc kubenswrapper[4619]: W0126 10:57:28.920731 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bcc19ee_a154_482d_84f3_3c8aed73db25.slice/crio-5545d2c1f70de6adc3606b2f3f435c4061dacf88cc9f73fd7d903a7606478f7d WatchSource:0}: Error finding container 5545d2c1f70de6adc3606b2f3f435c4061dacf88cc9f73fd7d903a7606478f7d: Status 404 returned error can't find the container with id 5545d2c1f70de6adc3606b2f3f435c4061dacf88cc9f73fd7d903a7606478f7d Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.923383 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64vv\" (UniqueName: \"kubernetes.io/projected/c7a17032-bac8-4534-8671-b0e660c7c685-kube-api-access-z64vv\") pod \"package-server-manager-789f6589d5-shltb\" (UID: \"c7a17032-bac8-4534-8671-b0e660c7c685\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.927650 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.932410 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.932631 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.432583518 +0000 UTC m=+148.466624234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.947420 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qtdv\" (UniqueName: \"kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv\") pod \"marketplace-operator-79b997595-pn8z7\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.959859 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.960461 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" Jan 26 10:57:28 crc kubenswrapper[4619]: E0126 10:57:28.960645 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.460628416 +0000 UTC m=+148.494669132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.970852 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:57:28 crc kubenswrapper[4619]: I0126 10:57:28.975987 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmmn4\" (UniqueName: \"kubernetes.io/projected/696254c7-95ab-43d9-9919-5d1146eec08e-kube-api-access-vmmn4\") pod \"control-plane-machine-set-operator-78cbb6b69f-l54p9\" (UID: \"696254c7-95ab-43d9-9919-5d1146eec08e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.003570 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nptd\" (UniqueName: \"kubernetes.io/projected/ca8acf28-fab4-4031-8106-3a1a99f7717f-kube-api-access-6nptd\") pod \"openshift-controller-manager-operator-756b6f6bc6-vv9kd\" (UID: \"ca8acf28-fab4-4031-8106-3a1a99f7717f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.008490 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.009426 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kq9qd"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.036065 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.037218 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvs6f\" (UniqueName: \"kubernetes.io/projected/3865e56a-bf16-4cd0-b7c1-f74e98381c5b-kube-api-access-jvs6f\") pod \"router-default-5444994796-snrcm\" (UID: \"3865e56a-bf16-4cd0-b7c1-f74e98381c5b\") " pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.039971 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424db312-75a1-4a4e-954d-4100001cb1ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ftrqq\" (UID: \"424db312-75a1-4a4e-954d-4100001cb1ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.043195 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6scv\" (UniqueName: \"kubernetes.io/projected/122bf0d5-ee1a-417a-ae58-ced70bfe2abf-kube-api-access-f6scv\") pod \"dns-default-8k974\" (UID: \"122bf0d5-ee1a-417a-ae58-ced70bfe2abf\") " pod="openshift-dns/dns-default-8k974" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.064399 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.065040 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 10:57:29.565018013 +0000 UTC m=+148.599058739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.075997 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsn62\" (UniqueName: \"kubernetes.io/projected/4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d-kube-api-access-jsn62\") pod \"packageserver-d55dfcdfc-6jdtp\" (UID: \"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.082352 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.104224 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.111887 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpx7\" (UniqueName: \"kubernetes.io/projected/a6ca7682-4544-4745-a71c-0f5b051d1f59-kube-api-access-kqpx7\") pod \"catalog-operator-68c6474976-qsq5c\" (UID: \"a6ca7682-4544-4745-a71c-0f5b051d1f59\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.129350 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llzt\" (UniqueName: \"kubernetes.io/projected/6431707b-d1e3-449c-a20a-2b4906880ad2-kube-api-access-7llzt\") pod \"service-ca-9c57cc56f-vpbzh\" (UID: \"6431707b-d1e3-449c-a20a-2b4906880ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.146887 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.152478 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6vg\" (UniqueName: \"kubernetes.io/projected/da51f450-5541-4a98-85e4-f9ce8e81fc1a-kube-api-access-lk6vg\") pod \"olm-operator-6b444d44fb-xg2bk\" (UID: \"da51f450-5541-4a98-85e4-f9ce8e81fc1a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.168265 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.169015 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24kc\" (UniqueName: \"kubernetes.io/projected/b62915f7-c2aa-468c-9cba-05ee4712a811-kube-api-access-s24kc\") pod \"machine-config-operator-74547568cd-twm6j\" (UID: \"b62915f7-c2aa-468c-9cba-05ee4712a811\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.169024 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.176166 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.182286 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.189404 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.689379497 +0000 UTC m=+148.723420223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.200672 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.203108 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.208326 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9t8\" (UniqueName: \"kubernetes.io/projected/9a275d95-f049-409f-aa77-8ed837840461-kube-api-access-7x9t8\") pod \"kube-storage-version-migrator-operator-b67b599dd-5mlf9\" (UID: \"9a275d95-f049-409f-aa77-8ed837840461\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.217233 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rbp\" (UniqueName: \"kubernetes.io/projected/28521f3a-f67f-461b-b56d-d8d023bebde2-kube-api-access-g9rbp\") pod \"etcd-operator-b45778765-98w5w\" (UID: \"28521f3a-f67f-461b-b56d-d8d023bebde2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.220387 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.226381 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zww6h\" (UniqueName: \"kubernetes.io/projected/2b01dc83-f1cc-447a-9b3c-3b9762465220-kube-api-access-zww6h\") pod \"machine-config-server-xr254\" (UID: \"2b01dc83-f1cc-447a-9b3c-3b9762465220\") " pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.232229 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.237857 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.243336 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5qq\" (UniqueName: \"kubernetes.io/projected/7e21dce7-f265-4965-b02a-64ca62ff71b4-kube-api-access-gt5qq\") pod \"multus-admission-controller-857f4d67dd-kb2jr\" (UID: \"7e21dce7-f265-4965-b02a-64ca62ff71b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.247098 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xr254" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.259680 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.280092 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.280775 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8k974" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.280972 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.780944481 +0000 UTC m=+148.814985197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.285328 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4glq\" (UniqueName: \"kubernetes.io/projected/01f7dbf9-8d49-4dca-9285-221957d8df27-kube-api-access-j4glq\") pod \"ingress-canary-t2xxs\" (UID: \"01f7dbf9-8d49-4dca-9285-221957d8df27\") " pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.287297 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxq7f\" (UniqueName: \"kubernetes.io/projected/73283b30-a00c-44f0-95a1-48257ac6ae48-kube-api-access-jxq7f\") pod \"service-ca-operator-777779d784-k2svc\" (UID: \"73283b30-a00c-44f0-95a1-48257ac6ae48\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.339594 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-llkgb"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.344679 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfvvk\" (UniqueName: \"kubernetes.io/projected/27a11a49-1783-4016-a84f-25b5c4eb9584-kube-api-access-tfvvk\") pod \"console-operator-58897d9998-6xg9r\" (UID: \"27a11a49-1783-4016-a84f-25b5c4eb9584\") " pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.347094 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nb87\" (UniqueName: \"kubernetes.io/projected/3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f-kube-api-access-5nb87\") pod \"authentication-operator-69f744f599-jtkbh\" (UID: \"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.352267 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.370937 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.381825 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.382343 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.88232218 +0000 UTC m=+148.916362886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.423548 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.462317 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.483398 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.483956 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:29.983929845 +0000 UTC m=+149.017970561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.496925 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.508989 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.577046 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t2xxs" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.588145 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.588792 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.088777844 +0000 UTC m=+149.122818560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.610244 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.629005 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.663183 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" event={"ID":"41dc8f80-5742-4e4b-943e-571ad0e59027","Type":"ContainerStarted","Data":"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.677098 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"12ee63131808adfd6a4c5ded4ecf6b491ea000668405dd9a42c6af59f4bb24fa"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.679076 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" event={"ID":"20ef7efa-6ea4-45aa-b18b-af795f8d0758","Type":"ContainerStarted","Data":"2f5c23ddcf5c1a0782227c06c2a039c260749e9ac3eb8ac0c76138130601edd4"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.693721 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.694393 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"faa5c83acb2401d9e99228efe85c10da2772a5b07f71386dcacdb3206eaabfd3"} Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.694398 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.194352951 +0000 UTC m=+149.228393667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.699374 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8a2d9d6f12090c05b56cb3d552ec37bab90b12496d9e81bcead427d6de0ad4ff"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.700311 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.701770 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" event={"ID":"81afd38e-4b98-450d-89b1-06efe9f059e8","Type":"ContainerStarted","Data":"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.702634 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.703590 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zdjpz" event={"ID":"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa","Type":"ContainerStarted","Data":"e0ea170810ac0eb7bb08789570ca844617ebd641bbced771082d76ef03661f0b"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.750351 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" event={"ID":"37ac2fdc-a31f-4bb2-91cf-962b10add71c","Type":"ContainerStarted","Data":"b127846f7023463f1d77a4840a16ed13b02fbb88c09a34d69b67889cb351d3dd"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.758627 4619 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6jj4w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.758725 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.771903 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" event={"ID":"1749757b-13ec-4692-8194-0816220d378c","Type":"ContainerStarted","Data":"ad5bec40ea73aae062e68b7273ddb5376ae3ea99103642312c0e39c86ec27fe1"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.787329 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" 
event={"ID":"554090f8-56c3-48d5-ab2e-082c8038b972","Type":"ContainerStarted","Data":"860710c8a847f4be56ab23599d670022c62af70ac11aa0549a86d3ce6998a5fb"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.796383 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.800873 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.300851972 +0000 UTC m=+149.334892688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.812660 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" event={"ID":"0bff9a47-4685-457d-8a24-6139113cdbd8","Type":"ContainerStarted","Data":"3a2e1bdfdcac57204d0bc3841f5c85b2a8ab7af1c055a12559ed3b22dbd865e2"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.812844 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" event={"ID":"0bff9a47-4685-457d-8a24-6139113cdbd8","Type":"ContainerStarted","Data":"6f746361e57087003e0539f4fffec5f8df0f6d02ef9bfd78eed83ad1e524ae9a"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.816668 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" event={"ID":"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689","Type":"ContainerStarted","Data":"631124f4a43d23b37fab2ec7a5187b803c3c838599788ebab00619d2922af26f"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.819717 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.821550 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k4prd"] Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.828885 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" event={"ID":"178692b1-15f3-46ad-b4ca-abc585040e46","Type":"ContainerStarted","Data":"5701a6d9a7ce7b329aa5b06f5e535d150297a262efa0cfe16b9bede8f1f427db"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.829045 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" event={"ID":"178692b1-15f3-46ad-b4ca-abc585040e46","Type":"ContainerStarted","Data":"cd5701b34ca47293be2190724c777f314082f8f09c46f245d6ed53963f7ef6cd"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.832587 4619 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" event={"ID":"0e239a69-8537-46b7-a0e0-30d8382b4e22","Type":"ContainerStarted","Data":"1ebd75002c5992b2e9ff5bd0d7ed3fb0cb356f10c2b7033576eee9a433fc1f14"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.848874 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" event={"ID":"5bcc19ee-a154-482d-84f3-3c8aed73db25","Type":"ContainerStarted","Data":"b92f8d2dc9bd1587d57f2ec206986ed3f30ffae41beaa588926f0215d6267611"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.848936 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" event={"ID":"5bcc19ee-a154-482d-84f3-3c8aed73db25","Type":"ContainerStarted","Data":"5545d2c1f70de6adc3606b2f3f435c4061dacf88cc9f73fd7d903a7606478f7d"} Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.850033 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.865826 4619 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gwzgx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.865901 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.898069 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.899459 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.399424829 +0000 UTC m=+149.433465545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.900524 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:29 crc kubenswrapper[4619]: E0126 10:57:29.912950 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.412929748 +0000 UTC m=+149.446970464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:29 crc kubenswrapper[4619]: I0126 10:57:29.996978 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.003943 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.005513 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.505494978 +0000 UTC m=+149.539535684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.059433 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.076549 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.080649 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.106862 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.107309 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.607293098 +0000 UTC m=+149.641333814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: W0126 10:57:30.150732 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod332f48dc_fb71_4e6e_bb33_c416af7b743b.slice/crio-e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a WatchSource:0}: Error finding container e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a: Status 404 returned error can't find the container with id e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.164670 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ff7zj"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.210727 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.211291 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.711248573 +0000 UTC m=+149.745289289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.216849 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.314419 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.315459 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.315912 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 10:57:30.815898057 +0000 UTC m=+149.849938773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.406089 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" podStartSLOduration=126.406065005 podStartE2EDuration="2m6.406065005s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:30.404942306 +0000 UTC m=+149.438983022" watchObservedRunningTime="2026-01-26 10:57:30.406065005 +0000 UTC m=+149.440105721" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.416791 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.417231 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:30.917209524 +0000 UTC m=+149.951250240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.470227 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.518488 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.519320 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.019306411 +0000 UTC m=+150.053347127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.534664 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zc6c9" podStartSLOduration=126.534645119 podStartE2EDuration="2m6.534645119s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:30.533844099 +0000 UTC m=+149.567884815" watchObservedRunningTime="2026-01-26 10:57:30.534645119 +0000 UTC m=+149.568685835" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.624047 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.624549 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.124527329 +0000 UTC m=+150.158568045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.730955 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.230939549 +0000 UTC m=+150.264980265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.731322 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.746751 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.835167 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.835668 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.335649854 +0000 UTC m=+150.369690570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.920205 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" podStartSLOduration=127.920173966 podStartE2EDuration="2m7.920173966s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:30.767309852 +0000 UTC m=+149.801350568" watchObservedRunningTime="2026-01-26 10:57:30.920173966 +0000 UTC m=+149.954214682" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.937545 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:30 crc kubenswrapper[4619]: E0126 10:57:30.938010 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.437992928 +0000 UTC m=+150.472033644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.945328 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8k974"] Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.960018 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" podStartSLOduration=126.956624661 podStartE2EDuration="2m6.956624661s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:30.864073972 +0000 UTC m=+149.898114688" watchObservedRunningTime="2026-01-26 10:57:30.956624661 +0000 UTC m=+149.990665367" Jan 26 10:57:30 crc kubenswrapper[4619]: I0126 10:57:30.985737 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" event={"ID":"5b65f743-5226-4893-bcbd-41a055959448","Type":"ContainerStarted","Data":"b65c70b83f372f8fe57a2b558709693f97730776a9a11322d221612228809631"} Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.040881 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.041269 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.541247705 +0000 UTC m=+150.575288421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.050661 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k4prd" event={"ID":"97a177f5-24c5-4f7e-9bc6-8c234fe0cf19","Type":"ContainerStarted","Data":"9fa7d0343a768d94e1aed89caade8c35776fb45273d222f8f146255f90b53826"} Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.066654 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snrcm" event={"ID":"3865e56a-bf16-4cd0-b7c1-f74e98381c5b","Type":"ContainerStarted","Data":"e75907c3ea42810de373968b3ee99d9a8b5d5dbc6499a1005f029a53457a63eb"} Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.082812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" event={"ID":"c7a17032-bac8-4534-8671-b0e660c7c685","Type":"ContainerStarted","Data":"ed90c3bde32afc26c1a7e058c6c233f8906cacb2695e06dac6903449e63fc58c"} Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.122254 4619 generic.go:334] "Generic (PLEG): container finished" podID="20ef7efa-6ea4-45aa-b18b-af795f8d0758" containerID="2f5c23ddcf5c1a0782227c06c2a039c260749e9ac3eb8ac0c76138130601edd4" exitCode=0 Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.122388 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" event={"ID":"20ef7efa-6ea4-45aa-b18b-af795f8d0758","Type":"ContainerDied","Data":"2f5c23ddcf5c1a0782227c06c2a039c260749e9ac3eb8ac0c76138130601edd4"} Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.145177 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.149731 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" event={"ID":"c0f3177c-80c0-4074-adef-91e20ece02b8","Type":"ContainerStarted","Data":"63f29405dd1fda06a8d093791383a205624e2308ca9bcaca49690997a5902733"} Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.150711 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.650688043 +0000 UTC m=+150.684728759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.173772 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xr254" event={"ID":"2b01dc83-f1cc-447a-9b3c-3b9762465220","Type":"ContainerStarted","Data":"58961d641d20a96d184c1f36d050747c717f3e9747fe94f501ffcc03a91749d2"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.175741 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" event={"ID":"08d7d14b-8c4c-4bb6-99d4-71618136533b","Type":"ContainerStarted","Data":"11b58841a6bab01f396295285d649822ac9361d1ab0cd24977fc4086a8f47081"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.195807 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" event={"ID":"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab","Type":"ContainerStarted","Data":"6095ca3f4e09caf990af09e7e77d22a8a696328f83d2e246617ff274c383db3b"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.198347 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" event={"ID":"424db312-75a1-4a4e-954d-4100001cb1ef","Type":"ContainerStarted","Data":"febd41053e398cca3bbbbfc8c7e9ea9dadb5a2276ec08711af67b3f488db2015"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.200423 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" event={"ID":"679cfe9e-496a-4d70-bfdf-7401ea224c8b","Type":"ContainerStarted","Data":"8b0ac43e96a68bcdceb9493c5c22adec63ff08cf9711980a361f31eed1c7fd99"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.203164 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" event={"ID":"332f48dc-fb71-4e6e-bb33-c416af7b743b","Type":"ContainerStarted","Data":"e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.204735 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" event={"ID":"ca8acf28-fab4-4031-8106-3a1a99f7717f","Type":"ContainerStarted","Data":"27a94a136c0c244caa90f861ca028b745dd205294458e0ecbd3a2c74c26635bc"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.207301 4619 generic.go:334] "Generic (PLEG): container finished" podID="1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689" containerID="ccb51ab914b64a0ed34d1515d117e73d80c341113496358ff45f75cbc70bd3d2" exitCode=0
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.207361 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" event={"ID":"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689","Type":"ContainerDied","Data":"ccb51ab914b64a0ed34d1515d117e73d80c341113496358ff45f75cbc70bd3d2"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.209992 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" event={"ID":"37ac2fdc-a31f-4bb2-91cf-962b10add71c","Type":"ContainerStarted","Data":"622a69f8ad8b9e6fb43d0de3afd2c6587f6a0fc00774e043c72b2d06311713de"}
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.244420 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx"
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.246471 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.246723 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.746697023 +0000 UTC m=+150.780737739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.246972 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.247414 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.747404431 +0000 UTC m=+150.781445147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.250547 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c"]
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.362390 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.370714 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.870693037 +0000 UTC m=+150.904733753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.485088 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.485450 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:31.985437303 +0000 UTC m=+151.019478009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.511589 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w"
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.609136 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.609728 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.109704105 +0000 UTC m=+151.143744821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.710628 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.711166 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.211149676 +0000 UTC m=+151.245190392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.812889 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.813599 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.313583232 +0000 UTC m=+151.347623938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.900734 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp"]
Jan 26 10:57:31 crc kubenswrapper[4619]: I0126 10:57:31.922337 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:31 crc kubenswrapper[4619]: E0126 10:57:31.922737 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.422725422 +0000 UTC m=+151.456766138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.024752 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.025227 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.525209319 +0000 UTC m=+151.559250035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.126455 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.129027 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.628991 +0000 UTC m=+151.663031706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.240990 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.241409 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.741390834 +0000 UTC m=+151.775431550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.321996 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpbzh"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.342603 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.343134 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.843099832 +0000 UTC m=+151.877140548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.410701 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k4prd" event={"ID":"97a177f5-24c5-4f7e-9bc6-8c234fe0cf19","Type":"ContainerStarted","Data":"9358c4104d02bcfa12f7f50cdb0faf478c1f65de7f22f12fadf7cde5df80f53a"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.411594 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k4prd"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.447586 4619 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.447665 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4prd" podUID="97a177f5-24c5-4f7e-9bc6-8c234fe0cf19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.448488 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.449065 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:32.94904813 +0000 UTC m=+151.983088846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.551259 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.552583 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.052563623 +0000 UTC m=+152.086604339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.588941 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k4prd" podStartSLOduration=128.588916835 podStartE2EDuration="2m8.588916835s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:32.585775494 +0000 UTC m=+151.619816230" watchObservedRunningTime="2026-01-26 10:57:32.588916835 +0000 UTC m=+151.622957551"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.592446 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" event={"ID":"5b65f743-5226-4893-bcbd-41a055959448","Type":"ContainerStarted","Data":"bd045bf2f5f91195087f54dc55627c43f3e6d14b74f7c15b9d4e0c6812d1b0f8"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.653517 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.659350 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.159312652 +0000 UTC m=+152.193353368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.662957 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2svc"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.666108 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.666525 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.166511108 +0000 UTC m=+152.200551824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.667491 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xr254" podStartSLOduration=7.667450233 podStartE2EDuration="7.667450233s" podCreationTimestamp="2026-01-26 10:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:32.656262493 +0000 UTC m=+151.690303209" watchObservedRunningTime="2026-01-26 10:57:32.667450233 +0000 UTC m=+151.701490949"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.712535 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.717117 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" event={"ID":"1749757b-13ec-4692-8194-0816220d378c","Type":"ContainerStarted","Data":"7bb4a080061d2d902db55ff8fa40a90db54cb61bfe6256c5be3ed780a57fdf10"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.756258 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8k974" event={"ID":"122bf0d5-ee1a-417a-ae58-ced70bfe2abf","Type":"ContainerStarted","Data":"964e4a05fd870cc0fa724c35b2aceac301b0ca042facab3832da6ef6f71bda37"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.768411 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.768881 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kb2jr"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.769196 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" event={"ID":"554090f8-56c3-48d5-ab2e-082c8038b972","Type":"ContainerStarted","Data":"4ca63451e01fc10a51c142f56f2223247ee3efd914990f8f0e29982bd1773e9f"}
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.769344 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.269316633 +0000 UTC m=+152.303357349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.795162 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pp656" podStartSLOduration=129.795140073 podStartE2EDuration="2m9.795140073s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:32.75684326 +0000 UTC m=+151.790883976" watchObservedRunningTime="2026-01-26 10:57:32.795140073 +0000 UTC m=+151.829180779"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.812081 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.824560 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6xg9r"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.832812 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-98w5w"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.838279 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.858427 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" event={"ID":"0bff9a47-4685-457d-8a24-6139113cdbd8","Type":"ContainerStarted","Data":"0d66f2efd444627eefd673dd8df1d60fb7ab36bb9d7c4a21af07638a1ad36699"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.873574 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.875184 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.375162408 +0000 UTC m=+152.409203124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.908998 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" event={"ID":"0026923c-f271-4a2c-864b-71f051a5f093","Type":"ContainerStarted","Data":"0d6707c1254b7ff2a802e7443c896f8e5fe6e3b22fcc9866ef59c46a81f926e8"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.911230 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hk46v" podStartSLOduration=128.911216313 podStartE2EDuration="2m8.911216313s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:32.903853282 +0000 UTC m=+151.937893998" watchObservedRunningTime="2026-01-26 10:57:32.911216313 +0000 UTC m=+151.945257029"
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.939342 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" event={"ID":"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d","Type":"ContainerStarted","Data":"1f377eb86a55235bac056fbfbd67e86ee7ae235485f5de7eee62c34db81c9c91"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.958532 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" event={"ID":"a6ca7682-4544-4745-a71c-0f5b051d1f59","Type":"ContainerStarted","Data":"60a85e49d2e3d49bccc4a10919452811be2ebc80d19fe4abaf1bc85b0c544230"}
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.974317 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:32 crc kubenswrapper[4619]: E0126 10:57:32.974865 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.474834083 +0000 UTC m=+152.508874799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.977294 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t2xxs"]
Jan 26 10:57:32 crc kubenswrapper[4619]: I0126 10:57:32.988733 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk"]
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.006508 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv"]
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.006573 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtkbh"]
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.038912 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j"]
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.039174 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" podStartSLOduration=130.039048198 podStartE2EDuration="2m10.039048198s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:33.034336466 +0000 UTC m=+152.068377182" watchObservedRunningTime="2026-01-26 10:57:33.039048198 +0000 UTC m=+152.073088914"
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.088779 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.088791 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zdjpz" podStartSLOduration=129.088766086 podStartE2EDuration="2m9.088766086s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:33.087532115 +0000 UTC m=+152.121572851" watchObservedRunningTime="2026-01-26 10:57:33.088766086 +0000 UTC m=+152.122806802"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.089713 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.589691781 +0000 UTC m=+152.623732497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.195465 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.197038 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.697013914 +0000 UTC m=+152.731054630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.297481 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.298252 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.798236598 +0000 UTC m=+152.832277314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.400145 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.400566 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:33.900546031 +0000 UTC m=+152.934586747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.510674 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.511142 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.011119828 +0000 UTC m=+153.045160544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.612753 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.612965 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.112926818 +0000 UTC m=+153.146967524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.613340 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.613809 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.11379684 +0000 UTC m=+153.147837546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.714011 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.714304 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.214268046 +0000 UTC m=+153.248308762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.714401 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.714893 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.214884832 +0000 UTC m=+153.248925548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.819046 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.819225 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.319193256 +0000 UTC m=+153.353233972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.819279 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.819657 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.319641258 +0000 UTC m=+153.353681974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:33 crc kubenswrapper[4619]: I0126 10:57:33.920546 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:33 crc kubenswrapper[4619]: E0126 10:57:33.920897 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.420818662 +0000 UTC m=+153.454859388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.022643 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.023401 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.523374141 +0000 UTC m=+153.557414857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.108426 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" event={"ID":"c7a17032-bac8-4534-8671-b0e660c7c685","Type":"ContainerStarted","Data":"d1689d74e4e8e43aa537064cbb3c3de31ca43b801460edfd2eb22128379ee09b"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.120394 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zdjpz" event={"ID":"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa","Type":"ContainerStarted","Data":"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.124815 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.125170 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.62515582 +0000 UTC m=+153.659196536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.189205 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" event={"ID":"7468d5f2-6ad2-480c-bad5-4bc64f6cbaab","Type":"ContainerStarted","Data":"ab1855b7a3ee3cefb892e8e4597ad521882b58fe570d04c36c8984a901220f92"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.237471 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" event={"ID":"c0f3177c-80c0-4074-adef-91e20ece02b8","Type":"ContainerStarted","Data":"098f4cf0dec658701d8db141c5bf0e52ef25fea7545ee79d793b5b2ce1772e1c"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.238299 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.240341 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.740327647 +0000 UTC m=+153.774368363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.266032 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" event={"ID":"4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d","Type":"ContainerStarted","Data":"6ddebd802e2e6f1b8324e345941dcddfa79da53465fb4c4ce5c4f1f25384dd04"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.281474 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.287366 4619 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6jdtp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.287445 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" podUID="4e363f08-f4c4-4bdd-ac61-18f22f0ccc3d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.330173 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" event={"ID":"37ac2fdc-a31f-4bb2-91cf-962b10add71c","Type":"ContainerStarted","Data":"0be728ca43c8e13d9644cb12357e67cced66071f505d139049aab3d85004d4dd"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.339395 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.360132 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.860103362 +0000 UTC m=+153.894144078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.412134 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" event={"ID":"b62915f7-c2aa-468c-9cba-05ee4712a811","Type":"ContainerStarted","Data":"8d012a615ff6c7a6a3e8a71633b2ce6571cf07824df11205555a3fbf29399671"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.440030 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" event={"ID":"424db312-75a1-4a4e-954d-4100001cb1ef","Type":"ContainerStarted","Data":"cf5f28a6e7e13f2935b2ff3e1b140250e1e1a4b41f0d59f42f038119b5e3531c"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.440099 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" event={"ID":"424db312-75a1-4a4e-954d-4100001cb1ef","Type":"ContainerStarted","Data":"1430466fbc54bb54093af456005f61a8e454374eaf2d357b9a37e2bdb3d53537"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.442513 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.449519 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:34.94950573 +0000 UTC m=+153.983546446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.451001 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8qrcl" podStartSLOduration=130.450990068 podStartE2EDuration="2m10.450990068s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.266769582 +0000 UTC m=+153.300810298" watchObservedRunningTime="2026-01-26 10:57:34.450990068 +0000 UTC m=+153.485030784"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.486078 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4qxb4" podStartSLOduration=130.486054458 podStartE2EDuration="2m10.486054458s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.441594565 +0000 UTC m=+153.475635281" watchObservedRunningTime="2026-01-26 10:57:34.486054458 +0000 UTC m=+153.520095164"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.515826 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" event={"ID":"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f","Type":"ContainerStarted","Data":"2ab2f2d08fcf838db896c9b151d5c9b009b850289037fb66989d7e094ccf198b"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.533194 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8hpj" podStartSLOduration=131.5331723 podStartE2EDuration="2m11.5331723s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.487995479 +0000 UTC m=+153.522036195" watchObservedRunningTime="2026-01-26 10:57:34.5331723 +0000 UTC m=+153.567213016"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.545185 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.546769 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.046751402 +0000 UTC m=+154.080792118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.556401 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" event={"ID":"73283b30-a00c-44f0-95a1-48257ac6ae48","Type":"ContainerStarted","Data":"b190475b7cf641b399e873b991176f261a2c69fef5daf9c9d5a16be36d883f42"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.559205 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.560224 4619 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qsq5c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.560252 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" podUID="a6ca7682-4544-4745-a71c-0f5b051d1f59" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.569047 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp" podStartSLOduration=130.56903633 podStartE2EDuration="2m10.56903633s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.535809148 +0000 UTC m=+153.569849854" watchObservedRunningTime="2026-01-26 10:57:34.56903633 +0000 UTC m=+153.603077046"
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.586822 4619 generic.go:334] "Generic (PLEG): container finished" podID="1749757b-13ec-4692-8194-0816220d378c" containerID="7bb4a080061d2d902db55ff8fa40a90db54cb61bfe6256c5be3ed780a57fdf10" exitCode=0
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.586883 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" event={"ID":"1749757b-13ec-4692-8194-0816220d378c","Type":"ContainerDied","Data":"7bb4a080061d2d902db55ff8fa40a90db54cb61bfe6256c5be3ed780a57fdf10"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.641896 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" event={"ID":"554090f8-56c3-48d5-ab2e-082c8038b972","Type":"ContainerStarted","Data":"a7c7971f24e8a6f631920c8cfa0987227180ce5804e2f8175c5c1bfd4f6ae196"}
Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.646379 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.647012 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.146999221 +0000 UTC m=+154.181039937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.662346 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xr254" event={"ID":"2b01dc83-f1cc-447a-9b3c-3b9762465220","Type":"ContainerStarted","Data":"51b8c47d4d488038d5a294001b2b0e0624fad7fb61de51e927bd637c409fbf67"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.672088 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" podStartSLOduration=130.672074791 podStartE2EDuration="2m10.672074791s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.663188761 +0000 UTC m=+153.697229477" watchObservedRunningTime="2026-01-26 10:57:34.672074791 +0000 UTC m=+153.706115507" Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.672337 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ftrqq" podStartSLOduration=130.672333688 podStartE2EDuration="2m10.672333688s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.585663411 +0000 UTC m=+153.619704137" watchObservedRunningTime="2026-01-26 10:57:34.672333688 +0000 UTC m=+153.706374404" Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.681482 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" event={"ID":"696254c7-95ab-43d9-9919-5d1146eec08e","Type":"ContainerStarted","Data":"6af8bf9e188ad41f2481da23b52493eb87db9de2511da5fa3bea71c5d896aea2"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.681519 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" event={"ID":"696254c7-95ab-43d9-9919-5d1146eec08e","Type":"ContainerStarted","Data":"aa9f4d8fb1b0f4ddfea40b3ae62a08dc636f3f074b19b957e5f57254c39d5a2b"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.682741 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" 
event={"ID":"7fd097cd-fa31-45f4-bd1d-f925d43f05cc","Type":"ContainerStarted","Data":"c631d180b0da3c1142ad22fe2fa6210a58499940d4a96e5364e200d2a6305b0d"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.684038 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" event={"ID":"20ef7efa-6ea4-45aa-b18b-af795f8d0758","Type":"ContainerStarted","Data":"3b90871cff31e327a2d5c8bf84e320080e6e4aa0a5bbefd8124ae3971931ec7f"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.712879 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" event={"ID":"6431707b-d1e3-449c-a20a-2b4906880ad2","Type":"ContainerStarted","Data":"3db906e58345192b6edca6c231b2f7010813b34a6949fcc5483477953ab03df8"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.702580 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" podStartSLOduration=130.702557492 podStartE2EDuration="2m10.702557492s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.697929992 +0000 UTC m=+153.731970708" watchObservedRunningTime="2026-01-26 10:57:34.702557492 +0000 UTC m=+153.736598208" Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.751269 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" event={"ID":"28521f3a-f67f-461b-b56d-d8d023bebde2","Type":"ContainerStarted","Data":"5f49d9ae39713821028f94f210e73f5f86b2c270cc4e46db5738cde6013960c5"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.753194 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.753320 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.253302547 +0000 UTC m=+154.287343263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.753741 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.757009 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.256999134 +0000 UTC m=+154.291039850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.773332 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" event={"ID":"7e21dce7-f265-4965-b02a-64ca62ff71b4","Type":"ContainerStarted","Data":"143d73e3c033b13230390eaf9f42a7abdef1f04ad6113984a2281dcf414d7978"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.782678 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" event={"ID":"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689","Type":"ContainerStarted","Data":"f9cf022b4c08c652efe5542fb8b0d8855f10958ea7455b305a0db37357246a62"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.782814 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" podStartSLOduration=130.782790572 podStartE2EDuration="2m10.782790572s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.780122573 +0000 UTC m=+153.814163289" watchObservedRunningTime="2026-01-26 10:57:34.782790572 +0000 UTC m=+153.816831288" Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.796439 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" event={"ID":"27a11a49-1783-4016-a84f-25b5c4eb9584","Type":"ContainerStarted","Data":"dcc504e34cf5bdf12e9c3acbd6403937a12639c13db32472ce54f28fd5bd183a"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.857651 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.859531 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.359509861 +0000 UTC m=+154.393550577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.870827 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" event={"ID":"ca8acf28-fab4-4031-8106-3a1a99f7717f","Type":"ContainerStarted","Data":"546dc21bc3b0e9a666d51d519a3ad9572c2780c2b3ca019b08f68e9d93ac0127"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.898789 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snrcm" event={"ID":"3865e56a-bf16-4cd0-b7c1-f74e98381c5b","Type":"ContainerStarted","Data":"dd3d0fc458e66375d6cc380aeed9e4d4e71f6d240e69b48f1735af48a4673a46"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.927639 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" event={"ID":"332f48dc-fb71-4e6e-bb33-c416af7b743b","Type":"ContainerStarted","Data":"51c15111aa612990cbfbdb938357cb97074a83a129a8f8f99d2a6242f45f94c2"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.939037 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l54p9" podStartSLOduration=130.939016733 podStartE2EDuration="2m10.939016733s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.853107586 +0000 UTC m=+153.887148302" watchObservedRunningTime="2026-01-26 10:57:34.939016733 +0000 UTC m=+153.973057449" Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.967464 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8k974" event={"ID":"122bf0d5-ee1a-417a-ae58-ced70bfe2abf","Type":"ContainerStarted","Data":"15cd1044109c039b3ccdb1c7a0d2b5a6b071d18b03c1ff3199b0918a88eda770"} Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.968200 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:34 crc kubenswrapper[4619]: E0126 10:57:34.969794 4619 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.46975996 +0000 UTC m=+154.503800676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:34 crc kubenswrapper[4619]: I0126 10:57:34.982725 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" event={"ID":"0e239a69-8537-46b7-a0e0-30d8382b4e22","Type":"ContainerStarted","Data":"d756841e24faf998758cddd86cd0f0e2f1e03c10cf7ebf3d83d597aa63c2b777"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.033123 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kq9qd" podStartSLOduration=131.033097122 podStartE2EDuration="2m11.033097122s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:34.967065851 +0000 UTC m=+154.001106567" watchObservedRunningTime="2026-01-26 10:57:35.033097122 +0000 UTC m=+154.067137838" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.034719 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-snrcm" podStartSLOduration=131.034713124 podStartE2EDuration="2m11.034713124s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.017382005 +0000 UTC m=+154.051422711" watchObservedRunningTime="2026-01-26 10:57:35.034713124 +0000 UTC m=+154.068753840" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.064529 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" event={"ID":"da51f450-5541-4a98-85e4-f9ce8e81fc1a","Type":"ContainerStarted","Data":"013af7709822cfd104045462961fc8c18e89b146c194822462e0f8c30f014dec"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.065663 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.066995 4619 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xg2bk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.067026 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" podUID="da51f450-5541-4a98-85e4-f9ce8e81fc1a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 10:57:35 
crc kubenswrapper[4619]: I0126 10:57:35.069294 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.071379 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.571354324 +0000 UTC m=+154.605395040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.106010 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" event={"ID":"87b07f24-92c0-4190-a140-6029e82f826d","Type":"ContainerStarted","Data":"d4c3767e98af82085b2d52fb095b9e17826f51456a95bae8d66fcd80dd0007c6"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.107154 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.108360 4619 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn8z7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.108401 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.137171 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" podStartSLOduration=131.13714521 podStartE2EDuration="2m11.13714521s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.135373644 +0000 UTC m=+154.169414360" watchObservedRunningTime="2026-01-26 10:57:35.13714521 +0000 UTC m=+154.171185926" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.138382 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vv9kd" podStartSLOduration=131.138377802 podStartE2EDuration="2m11.138377802s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 10:57:35.065871993 +0000 UTC m=+154.099912709" watchObservedRunningTime="2026-01-26 10:57:35.138377802 +0000 UTC m=+154.172418518" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.151808 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.162417 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" event={"ID":"0026923c-f271-4a2c-864b-71f051a5f093","Type":"ContainerStarted","Data":"eb115a326beee57ef2f6a43c1fb8c744a5fba7b621cc7aad43e6059c2e941d3c"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.165071 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:35 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:35 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:35 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.165141 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.171355 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.172199 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.672170438 +0000 UTC m=+154.706211154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.223795 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t2xxs" event={"ID":"01f7dbf9-8d49-4dca-9285-221957d8df27","Type":"ContainerStarted","Data":"56606706d1414cbab5b8942dfb20730d0860713c229df941eea5fa57e74409fc"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.223901 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" podStartSLOduration=131.223882299 podStartE2EDuration="2m11.223882299s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.223066719 +0000 UTC m=+154.257107435" watchObservedRunningTime="2026-01-26 10:57:35.223882299 +0000 UTC m=+154.257923015" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.273706 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.275026 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.775006015 +0000 UTC m=+154.809046731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.297817 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" event={"ID":"08d7d14b-8c4c-4bb6-99d4-71618136533b","Type":"ContainerStarted","Data":"56f5c2d3e8b6b5da9f10656a02952a275bd1c714e846be5799417349663bded7"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.297868 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" event={"ID":"08d7d14b-8c4c-4bb6-99d4-71618136533b","Type":"ContainerStarted","Data":"13d67663ced0f23d9c13081485c9bfba0ea6b5f199f5ec1faa5687e423a5871a"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.312708 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" event={"ID":"9a275d95-f049-409f-aa77-8ed837840461","Type":"ContainerStarted","Data":"69e7d5ac05483ceafa1d54b7ed58332a498d780699fd1e11243c592ff27f665c"} Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.315067 4619 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.315152 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4prd" podUID="97a177f5-24c5-4f7e-9bc6-8c234fe0cf19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.374883 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g6cvw" podStartSLOduration=131.374855584 podStartE2EDuration="2m11.374855584s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.29524756 +0000 UTC m=+154.329288276" watchObservedRunningTime="2026-01-26 10:57:35.374855584 +0000 UTC m=+154.408896300" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.375333 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" podStartSLOduration=131.375323717 podStartE2EDuration="2m11.375323717s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.354860046 +0000 UTC m=+154.388900782" watchObservedRunningTime="2026-01-26 10:57:35.375323717 +0000 UTC m=+154.409364433" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.376304 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.388349 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.888329574 +0000 UTC m=+154.922370290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.491545 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.493115 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:35.993097211 +0000 UTC m=+155.027137927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.493362 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" podStartSLOduration=131.493341587 podStartE2EDuration="2m11.493341587s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.490737239 +0000 UTC m=+154.524777955" watchObservedRunningTime="2026-01-26 10:57:35.493341587 +0000 UTC m=+154.527382303" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.516973 4619 csr.go:261] certificate signing request csr-lk5j5 is approved, waiting to be issued Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.593690 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.594160 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.09413976 +0000 UTC m=+155.128180476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.653138 4619 csr.go:257] certificate signing request csr-lk5j5 is issued Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.694924 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.695138 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.195103568 +0000 UTC m=+155.229144284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.695198 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.695548 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.195531859 +0000 UTC m=+155.229572575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.753353 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" podStartSLOduration=131.753317758 podStartE2EDuration="2m11.753317758s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.610502135 +0000 UTC m=+154.644542851" watchObservedRunningTime="2026-01-26 10:57:35.753317758 +0000 UTC m=+154.787358474" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.755828 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z2hch" podStartSLOduration=131.755816202 podStartE2EDuration="2m11.755816202s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.752915157 +0000 UTC m=+154.786955873" watchObservedRunningTime="2026-01-26 10:57:35.755816202 +0000 UTC m=+154.789856918" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.797151 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.797378 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.297342059 +0000 UTC m=+155.331382765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.798142 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.798547 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.298539061 +0000 UTC m=+155.332579777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.868736 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t2xxs" podStartSLOduration=10.86871433 podStartE2EDuration="10.86871433s" podCreationTimestamp="2026-01-26 10:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:35.86641606 +0000 UTC m=+154.900456776" watchObservedRunningTime="2026-01-26 10:57:35.86871433 +0000 UTC m=+154.902755046" Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.899289 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.899501 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.399462027 +0000 UTC m=+155.433502743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:35 crc kubenswrapper[4619]: I0126 10:57:35.899738 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:35 crc kubenswrapper[4619]: E0126 10:57:35.900157 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.400148665 +0000 UTC m=+155.434189381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.001127 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.001369 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.501334329 +0000 UTC m=+155.535375045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.001433 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.001853 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.501843302 +0000 UTC m=+155.535884018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.103054 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.103370 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.603308492 +0000 UTC m=+155.637349208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.103535 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.103954 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.603935619 +0000 UTC m=+155.637976335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.152364 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 10:57:36 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld
Jan 26 10:57:36 crc kubenswrapper[4619]: [+]process-running ok
Jan 26 10:57:36 crc kubenswrapper[4619]: healthz check failed
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.152458 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.204330 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.204572 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.704535848 +0000 UTC m=+155.738576564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.204691 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.205029 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.70501912 +0000 UTC m=+155.739059836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.305230 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.305458 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.805421614 +0000 UTC m=+155.839462330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.305530 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.305906 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.805891846 +0000 UTC m=+155.839932562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.308252 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-98w5w" event={"ID":"28521f3a-f67f-461b-b56d-d8d023bebde2","Type":"ContainerStarted","Data":"977360c145d97d907d8ce16be89f8c1931523d930e137b1368490db447481d85"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.310151 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" event={"ID":"87b07f24-92c0-4190-a140-6029e82f826d","Type":"ContainerStarted","Data":"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.310947 4619 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn8z7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.310998 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.312363 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" event={"ID":"679cfe9e-496a-4d70-bfdf-7401ea224c8b","Type":"ContainerStarted","Data":"e4dbc1ed19a8f7d9a081761ee6c95a3f79beb87986ce1dc5b1a7adb4134af779"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.314427 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" event={"ID":"0026923c-f271-4a2c-864b-71f051a5f093","Type":"ContainerStarted","Data":"1e3a432ad89b611a2258468603e0236a4340f116e9d367546f332f371ca3da1d"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.316679 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" event={"ID":"c7a17032-bac8-4534-8671-b0e660c7c685","Type":"ContainerStarted","Data":"8fb9acf2c2a2d2cd57b953221f68472878f864e65c697d7171ae9d676308fae1"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.316796 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.318539 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" event={"ID":"7e21dce7-f265-4965-b02a-64ca62ff71b4","Type":"ContainerStarted","Data":"83273cc7d81519f011a603cf8ff02d0f74257cae84e094a2e1d35401ac034603"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.318570 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" event={"ID":"7e21dce7-f265-4965-b02a-64ca62ff71b4","Type":"ContainerStarted","Data":"5f5ddcf14af97300ea684cdf4a1a43c1b4cd763f1a9d4f73a1874ca3ea74f157"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.320300 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" event={"ID":"7fd097cd-fa31-45f4-bd1d-f925d43f05cc","Type":"ContainerStarted","Data":"6271142523d7492f069d15f91095385c6a695b7ae7c8008eec442169a9215b82"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.320330 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" event={"ID":"7fd097cd-fa31-45f4-bd1d-f925d43f05cc","Type":"ContainerStarted","Data":"280acbbf2cc03ceb11522afc0aa00ccfd34c5881905682ae368c9becf1bb04d1"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.321678 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" event={"ID":"27a11a49-1783-4016-a84f-25b5c4eb9584","Type":"ContainerStarted","Data":"d5d13cb54a50cb56664e62143969977d13800045aa7f0b8895e07fab31151bb2"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.321840 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6xg9r"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.323505 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk" event={"ID":"da51f450-5541-4a98-85e4-f9ce8e81fc1a","Type":"ContainerStarted","Data":"b8c9a4e19377d12d4eb951f1b306cfa767710320453a865a1b21536ef40b6e46"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.324847 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t2xxs" event={"ID":"01f7dbf9-8d49-4dca-9285-221957d8df27","Type":"ContainerStarted","Data":"678831503a95f558911e56e812814292690bcf04d54426eefff6c9c53c30be29"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.327269 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" event={"ID":"1749757b-13ec-4692-8194-0816220d378c","Type":"ContainerStarted","Data":"dfbff6bcc3b82a829d9d8e135aa3a3fa35458419f534401bdedc9fe7acde7ce6"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.327425 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.328481 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" event={"ID":"9a275d95-f049-409f-aa77-8ed837840461","Type":"ContainerStarted","Data":"f1f953d77e6902eff2bc9d9628865f9b074ceb057af12ff77a3e945f9fd7e493"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.329826 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2svc" event={"ID":"73283b30-a00c-44f0-95a1-48257ac6ae48","Type":"ContainerStarted","Data":"297de06973591d41f1f4b2610a123eef8ab7128a2fe057b9759f5d9058dd25c9"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.331539 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c" event={"ID":"a6ca7682-4544-4745-a71c-0f5b051d1f59","Type":"ContainerStarted","Data":"d27ba3580a1143b5c0e086182071789d1bd4e3d43ee96f3014c3a0d6fb7dbca0"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.335781 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8k974" event={"ID":"122bf0d5-ee1a-417a-ae58-ced70bfe2abf","Type":"ContainerStarted","Data":"a2e54a57d7118ddce7ea9a2446a6c6b82d1ecccf6257aa022956e2b3c327cbdb"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.335938 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8k974"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.341979 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vpbzh" event={"ID":"6431707b-d1e3-449c-a20a-2b4906880ad2","Type":"ContainerStarted","Data":"86e625d8668d896e67b89e9c951dc57380333a787f3a055c24de804f8991b9fa"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.344805 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" event={"ID":"b62915f7-c2aa-468c-9cba-05ee4712a811","Type":"ContainerStarted","Data":"1e92bc264d33290f86581506000ead21b547082d514c818753965fd8e1e7402c"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.344876 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" event={"ID":"b62915f7-c2aa-468c-9cba-05ee4712a811","Type":"ContainerStarted","Data":"ef5376a22ad6bf11fb88aabf4e53b25d3b3dd41f076f2185fdb4820d065d2461"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.347245 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" event={"ID":"1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689","Type":"ContainerStarted","Data":"48dae56cd9ca735a70b67b49996009495296c635bf8f4365701270e4f6342059"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.347356 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsq5c"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.348786 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" event={"ID":"3e7bb9cb-a297-4c7f-a6d7-ab0792208a2f","Type":"ContainerStarted","Data":"c60d01fd22a2225ab935cd2afaee018376bc054d23d9c831172fc4930b70d5c1"}
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.356921 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xg2bk"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.407341 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.409406 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:36.909372619 +0000 UTC m=+155.943413335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.467603 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5mlf9" podStartSLOduration=132.467576709 podStartE2EDuration="2m12.467576709s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.017388905 +0000 UTC m=+155.051429791" watchObservedRunningTime="2026-01-26 10:57:36.467576709 +0000 UTC m=+155.501617425"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.468163 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" podStartSLOduration=132.468158103 podStartE2EDuration="2m12.468158103s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.46068393 +0000 UTC m=+155.494724646" watchObservedRunningTime="2026-01-26 10:57:36.468158103 +0000 UTC m=+155.502198819"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.514765 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.515431 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.015409009 +0000 UTC m=+156.049449725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.616476 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.616967 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.116951241 +0000 UTC m=+156.150991947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.662982 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 10:52:35 +0000 UTC, rotation deadline is 2026-11-10 23:38:54.31910491 +0000 UTC
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.663039 4619 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6924h41m17.656069744s for next certificate rotation
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.721604 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.722037 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.222023916 +0000 UTC m=+156.256064632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.754021 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" podStartSLOduration=132.753996475 podStartE2EDuration="2m12.753996475s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.753725628 +0000 UTC m=+155.787766344" watchObservedRunningTime="2026-01-26 10:57:36.753996475 +0000 UTC m=+155.788037181"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.823557 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.823915 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.323899078 +0000 UTC m=+156.357939794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.839425 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twm6j" podStartSLOduration=132.83940659 podStartE2EDuration="2m12.83940659s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.817875952 +0000 UTC m=+155.851916668" watchObservedRunningTime="2026-01-26 10:57:36.83940659 +0000 UTC m=+155.873447306"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.876867 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8k974" podStartSLOduration=11.876843871 podStartE2EDuration="11.876843871s" podCreationTimestamp="2026-01-26 10:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.875179368 +0000 UTC m=+155.909220084" watchObservedRunningTime="2026-01-26 10:57:36.876843871 +0000 UTC m=+155.910884577"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.911433 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6jdtp"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.926803 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:36 crc kubenswrapper[4619]: E0126 10:57:36.927177 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.427165686 +0000 UTC m=+156.461206392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.974107 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t67nv" podStartSLOduration=132.97402878 podStartE2EDuration="2m12.97402878s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.972222024 +0000 UTC m=+156.006262740" watchObservedRunningTime="2026-01-26 10:57:36.97402878 +0000 UTC m=+156.008069496"
Jan 26 10:57:36 crc kubenswrapper[4619]: I0126 10:57:36.975337 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kb2jr" podStartSLOduration=132.975330074 podStartE2EDuration="2m12.975330074s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:36.925945384 +0000 UTC m=+155.959986100" watchObservedRunningTime="2026-01-26 10:57:36.975330074 +0000 UTC m=+156.009370790"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.028629 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.028844 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.52880691 +0000 UTC m=+156.562847626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.029363 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.029825 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.529809107 +0000 UTC m=+156.563849823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.130001 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.130200 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.630169889 +0000 UTC m=+156.664210605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.130362 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.130751 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.630735964 +0000 UTC m=+156.664776680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.133405 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" podStartSLOduration=134.133392943 podStartE2EDuration="2m14.133392943s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:37.048332377 +0000 UTC m=+156.082373093" watchObservedRunningTime="2026-01-26 10:57:37.133392943 +0000 UTC m=+156.167433659"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.151406 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 10:57:37 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]process-running ok
Jan 26 10:57:37 crc kubenswrapper[4619]: healthz check failed
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.151482 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.190149 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtkbh" podStartSLOduration=134.190120714 podStartE2EDuration="2m14.190120714s" podCreationTimestamp="2026-01-26 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:37.188600155 +0000 UTC m=+156.222640861" watchObservedRunningTime="2026-01-26 10:57:37.190120714 +0000 UTC m=+156.224161430"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.190657 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb" podStartSLOduration=133.190651577 podStartE2EDuration="2m13.190651577s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:37.13287864 +0000 UTC m=+156.166919356" watchObservedRunningTime="2026-01-26 10:57:37.190651577 +0000 UTC m=+156.224692313"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.231104 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.231579 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.731561658 +0000 UTC m=+156.765602374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.322944 4619 patch_prober.go:28] interesting pod/console-operator-58897d9998-6xg9r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.323055 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" podUID="27a11a49-1783-4016-a84f-25b5c4eb9584" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.332448 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.332926 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.832912567 +0000 UTC m=+156.866953283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.361936 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" event={"ID":"679cfe9e-496a-4d70-bfdf-7401ea224c8b","Type":"ContainerStarted","Data":"687e8439d628bc968ce9e271ffc54c90a597b4b559ce6ff9ffe989f8608ba826"}
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.362003 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" event={"ID":"679cfe9e-496a-4d70-bfdf-7401ea224c8b","Type":"ContainerStarted","Data":"49495d809c6fc984388d6413bd94df26582f0f65a1acb5ebca02d59e79635b7c"}
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.365354 4619 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn8z7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.365411 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.405348 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-llkgb"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.435204 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.435423 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.935386654 +0000 UTC m=+156.969427370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.435941 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.441795 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:37.941777599 +0000 UTC m=+156.975818315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.461123 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.461166 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.478176 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-th5wb"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.478241 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-th5wb"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.482320 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.514981 4619 patch_prober.go:28] interesting pod/apiserver-76f77b778f-th5wb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]log ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]etcd ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/max-in-flight-filter ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 26 10:57:37 crc kubenswrapper[4619]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/openshift.io-startinformers ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 26 10:57:37 crc kubenswrapper[4619]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 26 10:57:37 crc kubenswrapper[4619]: livez check failed
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.515114 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" podUID="1f2ef0ca-8e7f-43c4-a6b8-5768a6dc8689" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.542225 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.543835 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.043811135 +0000 UTC m=+157.077851851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.629011 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"]
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.630009 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.639156 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.643778 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.644137 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.144123976 +0000 UTC m=+157.178164692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.652482 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"]
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.752565 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.752783 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.752810 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.752850 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbk9q\" (UniqueName: \"kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.752984 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.252963058 +0000 UTC m=+157.287003764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.766272 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlrvs"]
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.767559 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.780021 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.805633 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlrvs"]
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857457 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857477 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857504 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857536 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbk9q\" (UniqueName: \"kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857570 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wgx\" (UniqueName: \"kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.857599 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.858062 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.858392 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.358379932 +0000 UTC m=+157.392420648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.858788 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.893637 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbk9q\" (UniqueName: \"kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q\") pod \"certified-operators-bsqjj\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") " pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.958370 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.958634 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.958697 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8wgx\" (UniqueName: \"kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.958734 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.960130 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.960350 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:37 crc kubenswrapper[4619]: E0126 10:57:37.960497 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.460467139 +0000 UTC m=+157.494507865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.970127 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsqjj"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.971999 4619 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.972002 4619 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4prd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.972078 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4prd" podUID="97a177f5-24c5-4f7e-9bc6-8c234fe0cf19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 26 10:57:37 crc kubenswrapper[4619]: I0126 10:57:37.972106 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k4prd" podUID="97a177f5-24c5-4f7e-9bc6-8c234fe0cf19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.010571 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8wgx\" (UniqueName: \"kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx\") pod \"community-operators-zlrvs\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.044997 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"]
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.046711 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.062708 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.063148 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.56313523 +0000 UTC m=+157.597175936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.085827 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"]
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.093056 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.170327 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.170677 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.170750 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.170779 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxc5\" (UniqueName: \"kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.174228 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jlwxk"]
Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.174749 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.674708794 +0000 UTC m=+157.708749510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.174727 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 10:57:38 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld
Jan 26 10:57:38 crc kubenswrapper[4619]: [+]process-running ok
Jan 26 10:57:38 crc kubenswrapper[4619]: healthz check failed
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.174848 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.175251 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlwxk"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.281601 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlwxk"]
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283758 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8xw\" (UniqueName: \"kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283808 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283839 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxc5\" (UniqueName: \"kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283899 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk"
Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283935 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content\") pod
\"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.283962 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.284029 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.284374 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.784362817 +0000 UTC m=+157.818403533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.285029 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.285777 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.365097 4619 patch_prober.go:28] interesting pod/console-operator-58897d9998-6xg9r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.365191 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" podUID="27a11a49-1783-4016-a84f-25b5c4eb9584" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.386202 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.386427 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8xw\" (UniqueName: \"kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.386477 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.386513 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.387031 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.387195 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:38.887175303 +0000 UTC m=+157.921216019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.387667 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.410414 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxc5\" (UniqueName: \"kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5\") pod \"certified-operators-bf7nn\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.432232 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2sj2" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.439395 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8xw\" (UniqueName: \"kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw\") pod \"community-operators-jlwxk\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") " pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.498336 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.498414 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.499591 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.503756 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.003736485 +0000 UTC m=+158.037777201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.504963 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.505161 4619 patch_prober.go:28] interesting pod/console-f9d7485db-zdjpz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.505217 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zdjpz" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.602251 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.603951 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.103930023 +0000 UTC m=+158.137970739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.677400 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.704703 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.705069 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 10:57:39.205051846 +0000 UTC m=+158.239092562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.705423 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"] Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.805478 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.805953 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.305932291 +0000 UTC m=+158.339973007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:38 crc kubenswrapper[4619]: I0126 10:57:38.912757 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:38 crc kubenswrapper[4619]: E0126 10:57:38.913586 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.413570633 +0000 UTC m=+158.447611349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.014017 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.014348 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.514331535 +0000 UTC m=+158.548372251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.085738 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlrvs"] Jan 26 10:57:39 crc kubenswrapper[4619]: W0126 10:57:39.097395 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod722ff7bc_6563_4562_b96f_430b1b2fedd1.slice/crio-68b37fcb6502fa077dcbdeac280c3b9642ddf02e4dc95bcc4983b5101511a2a4 WatchSource:0}: Error finding container 68b37fcb6502fa077dcbdeac280c3b9642ddf02e4dc95bcc4983b5101511a2a4: Status 404 returned error can't find the container with id 68b37fcb6502fa077dcbdeac280c3b9642ddf02e4dc95bcc4983b5101511a2a4 Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.115399 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.116037 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.615997071 +0000 UTC m=+158.650037787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.147924 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.154841 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:39 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:39 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:39 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.154904 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.216309 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.216840 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.716820395 +0000 UTC m=+158.750861111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.303212 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.324752 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.326358 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:39.826342105 +0000 UTC m=+158.860382821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.405964 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerStarted","Data":"68b37fcb6502fa077dcbdeac280c3b9642ddf02e4dc95bcc4983b5101511a2a4"} Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.425315 4619 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.425717 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerStarted","Data":"e61163909f1d35dd1826224e57970a32f58185d9928a2c2caf74791612ab42ae"} Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.426385 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.427853 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 10:57:39.927827407 +0000 UTC m=+158.961868123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.476449 4619 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T10:57:39.425374323Z","Handler":null,"Name":""} Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.478830 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" event={"ID":"679cfe9e-496a-4d70-bfdf-7401ea224c8b","Type":"ContainerStarted","Data":"a46cd0efda0e1e92c6122ebece15e4b0df175079d69d7f61013d4fc309e0ae50"} Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.487315 4619 generic.go:334] "Generic (PLEG): container finished" podID="332f48dc-fb71-4e6e-bb33-c416af7b743b" containerID="51c15111aa612990cbfbdb938357cb97074a83a129a8f8f99d2a6242f45f94c2" exitCode=0 Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.488478 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" event={"ID":"332f48dc-fb71-4e6e-bb33-c416af7b743b","Type":"ContainerDied","Data":"51c15111aa612990cbfbdb938357cb97074a83a129a8f8f99d2a6242f45f94c2"} Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.529121 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.529501 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 10:57:40.029487003 +0000 UTC m=+159.063527719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-84lr5" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.534729 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ff7zj" podStartSLOduration=14.534703718 podStartE2EDuration="14.534703718s" podCreationTimestamp="2026-01-26 10:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:39.52975478 +0000 UTC m=+158.563795496" watchObservedRunningTime="2026-01-26 10:57:39.534703718 +0000 UTC m=+158.568744434" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.589102 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlwxk"] Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.633170 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:39 crc kubenswrapper[4619]: E0126 10:57:39.635194 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 10:57:40.135149873 +0000 UTC m=+159.169190589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.636303 4619 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.636353 4619 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.695334 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6xg9r" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.735658 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.756702 4619 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.756764 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.767224 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"] Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.768466 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.783790 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.886281 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"] Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.941315 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-467ds\" (UniqueName: \"kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.941378 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:39 crc kubenswrapper[4619]: I0126 10:57:39.941401 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.024434 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.044411 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.044592 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.044883 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.044930 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-467ds\" (UniqueName: \"kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.045125 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.075908 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-467ds\" (UniqueName: \"kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds\") pod \"redhat-marketplace-tvhzw\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.101715 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.119992 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-84lr5\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.146559 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.157970 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:40 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:40 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:40 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.158047 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.162134 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.163583 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.173866 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.189290 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.248395 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.248494 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.248722 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2j6\" (UniqueName: \"kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.255676 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.350396 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2j6\" (UniqueName: \"kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.350790 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.350859 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.352067 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.362093 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.386236 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2j6\" (UniqueName: \"kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6\") pod \"redhat-marketplace-krm7h\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.497994 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.531000 4619 generic.go:334] "Generic (PLEG): container finished" podID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerID="d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb" exitCode=0 Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.531093 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerDied","Data":"d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.531129 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerStarted","Data":"ccccd02fb5f3225e2331cdce02d5ca946ef3bc95e3b8f6eb58feb40651983757"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.545088 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.553710 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.562907 4619 generic.go:334] "Generic (PLEG): container finished" podID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerID="98f7138793e6eabd97db1f0d370a7730d72666200d77bec5af46a885759596c7" exitCode=0 Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.563158 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerDied","Data":"98f7138793e6eabd97db1f0d370a7730d72666200d77bec5af46a885759596c7"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.583430 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.584960 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.589045 4619 generic.go:334] "Generic (PLEG): container finished" podID="7a310950-4656-4955-b453-e846f29f47d8" containerID="935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60" exitCode=0 Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.589155 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerDied","Data":"935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.589390 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.596159 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.624696 4619 generic.go:334] "Generic (PLEG): container finished" podID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerID="301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6" exitCode=0 Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.624975 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerDied","Data":"301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.625035 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerStarted","Data":"84373ef920155313ec97956d0417c12205e11a54496de0c7bd2b4f482184f0c4"} Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.635692 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.684495 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.763903 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.766384 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.772939 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.789849 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.789906 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.809366 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"] Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.892572 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.892636 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.892664 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.892717 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ql9\" (UniqueName: \"kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.892765 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.893142 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.924714 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.925046 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.994606 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ql9\" (UniqueName: \"kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.995131 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.995185 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.995814 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:40 crc kubenswrapper[4619]: I0126 10:57:40.996490 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.033962 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ql9\" (UniqueName: \"kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9\") pod \"redhat-operators-7mntj\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.111408 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.153592 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:41 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:41 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:41 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.153671 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.167478 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.168730 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.178775 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"] Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.220910 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.241161 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:57:41 crc kubenswrapper[4619]: W0126 10:57:41.247175 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac07b4ac_4523_451f_91c0_9c4754786fce.slice/crio-cf80e00b5df6715f99300d45f90b71834d46db29ce3230a2154b855d102980c9 WatchSource:0}: Error finding container cf80e00b5df6715f99300d45f90b71834d46db29ce3230a2154b855d102980c9: Status 404 returned error can't find the container with id cf80e00b5df6715f99300d45f90b71834d46db29ce3230a2154b855d102980c9 Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.288711 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.314450 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.314531 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fcs\" (UniqueName: \"kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.314571 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.410574 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 10:57:41 crc kubenswrapper[4619]: E0126 10:57:41.410832 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332f48dc-fb71-4e6e-bb33-c416af7b743b" containerName="collect-profiles" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.410843 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="332f48dc-fb71-4e6e-bb33-c416af7b743b" containerName="collect-profiles" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.410965 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="332f48dc-fb71-4e6e-bb33-c416af7b743b" containerName="collect-profiles" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.411391 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.419853 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdkh\" (UniqueName: \"kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh\") pod \"332f48dc-fb71-4e6e-bb33-c416af7b743b\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.419920 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume\") pod \"332f48dc-fb71-4e6e-bb33-c416af7b743b\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.420031 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume\") pod \"332f48dc-fb71-4e6e-bb33-c416af7b743b\" (UID: \"332f48dc-fb71-4e6e-bb33-c416af7b743b\") " Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.420244 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fcs\" (UniqueName: \"kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.420300 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.420347 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: 
I0126 10:57:41.420817 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.422399 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume" (OuterVolumeSpecName: "config-volume") pod "332f48dc-fb71-4e6e-bb33-c416af7b743b" (UID: "332f48dc-fb71-4e6e-bb33-c416af7b743b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.425812 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.426072 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.427607 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.429586 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "332f48dc-fb71-4e6e-bb33-c416af7b743b" (UID: "332f48dc-fb71-4e6e-bb33-c416af7b743b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.434929 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.446767 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh" (OuterVolumeSpecName: "kube-api-access-kqdkh") pod "332f48dc-fb71-4e6e-bb33-c416af7b743b" (UID: "332f48dc-fb71-4e6e-bb33-c416af7b743b"). InnerVolumeSpecName "kube-api-access-kqdkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.477127 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fcs\" (UniqueName: \"kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs\") pod \"redhat-operators-wknfd\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") " pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.526252 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.537220 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.537335 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.537399 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332f48dc-fb71-4e6e-bb33-c416af7b743b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.537423 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqdkh\" (UniqueName: \"kubernetes.io/projected/332f48dc-fb71-4e6e-bb33-c416af7b743b-kube-api-access-kqdkh\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.537450 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/332f48dc-fb71-4e6e-bb33-c416af7b743b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.580462 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.638468 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.638581 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.638864 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.646598 4619 generic.go:334] "Generic (PLEG): container finished" podID="3d6d6055-6441-4f47-8107-8886901691cc" containerID="ddc23e268cac64005f25ea547210357012aa148d4eef396bdd0c1a796982b84e" exitCode=0 Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.646725 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerDied","Data":"ddc23e268cac64005f25ea547210357012aa148d4eef396bdd0c1a796982b84e"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.646767 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerStarted","Data":"0e197741cbab0b371c21d5646975244bb96d819a27fb0311e64ecead88b98073"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.648368 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" event={"ID":"332f48dc-fb71-4e6e-bb33-c416af7b743b","Type":"ContainerDied","Data":"e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.648404 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a2d6b6be407953821540e1c3824963d1bb8d7d2398b39b0eae6da225d5724a" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.648532 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.683126 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" event={"ID":"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6","Type":"ContainerStarted","Data":"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.683197 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" event={"ID":"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6","Type":"ContainerStarted","Data":"c3a36e3831b833e5679daea878971a7d8fa3c37a098ecb26f320e8c108f2ab81"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.685107 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.691327 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.697961 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a241b5bb-7416-4e64-a2d6-2cf6145fea80","Type":"ContainerStarted","Data":"4dfbdbec686787908f89a53dce212fba2f89ef613996bc00f82cdaf6ce681c2c"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.713787 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" podStartSLOduration=137.713765381 
podStartE2EDuration="2m17.713765381s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:41.711156253 +0000 UTC m=+160.745196989" watchObservedRunningTime="2026-01-26 10:57:41.713765381 +0000 UTC m=+160.747806097" Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.767883 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerStarted","Data":"cf80e00b5df6715f99300d45f90b71834d46db29ce3230a2154b855d102980c9"} Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.845369 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"] Jan 26 10:57:41 crc kubenswrapper[4619]: I0126 10:57:41.869832 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:41 crc kubenswrapper[4619]: W0126 10:57:41.907028 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74eaac61_fd26_4596_ab74_d1282c0baf2b.slice/crio-294f4a96ede982e10f8da8776e90ef9d5fb7d03e4776d5c267fc0104969287f7 WatchSource:0}: Error finding container 294f4a96ede982e10f8da8776e90ef9d5fb7d03e4776d5c267fc0104969287f7: Status 404 returned error can't find the container with id 294f4a96ede982e10f8da8776e90ef9d5fb7d03e4776d5c267fc0104969287f7 Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.152576 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:42 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:42 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:42 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.152755 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.186093 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.488058 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.493916 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-th5wb" Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.870650 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.918702 4619 generic.go:334] "Generic (PLEG): container finished" podID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerID="0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0" exitCode=0 Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.918807 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerDied","Data":"0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0"} Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.992464 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerStarted","Data":"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70"} Jan 26 10:57:42 crc kubenswrapper[4619]: I0126 10:57:42.992514 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerStarted","Data":"c6c9b9f7c0cc510ac813bb40f83762fa7b10c896dbfdc9bf17d7b607b827a3a2"} Jan 26 10:57:43 crc kubenswrapper[4619]: I0126 10:57:43.035908 4619 generic.go:334] "Generic (PLEG): container finished" podID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerID="2034dde0b37929f9d4a247913e23c94c4ca038f68fcc455e5c5dbb831cc7fb68" exitCode=0 Jan 26 10:57:43 crc kubenswrapper[4619]: I0126 10:57:43.037391 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerDied","Data":"2034dde0b37929f9d4a247913e23c94c4ca038f68fcc455e5c5dbb831cc7fb68"} Jan 26 10:57:43 crc kubenswrapper[4619]: I0126 10:57:43.037426 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerStarted","Data":"294f4a96ede982e10f8da8776e90ef9d5fb7d03e4776d5c267fc0104969287f7"} Jan 26 10:57:43 crc kubenswrapper[4619]: I0126 10:57:43.165912 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:43 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:43 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:43 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:43 crc kubenswrapper[4619]: I0126 10:57:43.165980 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.073523 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a12d46a3-06e3-4d54-91bd-7e5493755a22","Type":"ContainerStarted","Data":"64f49b493d608a7d4cde96889d4df911508c9e417114ef82ebee20cac14dc5ad"} Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.085548 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a241b5bb-7416-4e64-a2d6-2cf6145fea80","Type":"ContainerStarted","Data":"dcc5212eb4ecdace2ce210d7d2cb36d3c0d8069c43a7b7f89bf3d5c22d918466"} Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.090905 4619 generic.go:334] "Generic (PLEG): container finished" podID="c8f86935-b38f-41f5-a236-0d09213a5077" containerID="969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70" exitCode=0 Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.091671 4619 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerDied","Data":"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70"} Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.130460 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.130440005 podStartE2EDuration="4.130440005s" podCreationTimestamp="2026-01-26 10:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:44.129964712 +0000 UTC m=+163.164005438" watchObservedRunningTime="2026-01-26 10:57:44.130440005 +0000 UTC m=+163.164480721" Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.150990 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:44 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:44 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:44 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.151071 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.234719 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.234783 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:57:44 crc kubenswrapper[4619]: I0126 10:57:44.283936 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8k974" Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 10:57:45.132275 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a12d46a3-06e3-4d54-91bd-7e5493755a22","Type":"ContainerStarted","Data":"00c07ab599d501b4f68f68a0ba1a3de60c18d33c03659585b15f7262b284c900"} Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 10:57:45.138742 4619 generic.go:334] "Generic (PLEG): container finished" podID="a241b5bb-7416-4e64-a2d6-2cf6145fea80" containerID="dcc5212eb4ecdace2ce210d7d2cb36d3c0d8069c43a7b7f89bf3d5c22d918466" exitCode=0 Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 10:57:45.138788 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a241b5bb-7416-4e64-a2d6-2cf6145fea80","Type":"ContainerDied","Data":"dcc5212eb4ecdace2ce210d7d2cb36d3c0d8069c43a7b7f89bf3d5c22d918466"} Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 
10:57:45.168091 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:45 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:45 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:45 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 10:57:45.168484 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:45 crc kubenswrapper[4619]: I0126 10:57:45.196445 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.196413745 podStartE2EDuration="4.196413745s" podCreationTimestamp="2026-01-26 10:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:45.164183459 +0000 UTC m=+164.198224175" watchObservedRunningTime="2026-01-26 10:57:45.196413745 +0000 UTC m=+164.230454461" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.149774 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:46 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:46 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:46 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.149843 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.165385 4619 generic.go:334] "Generic (PLEG): container finished" podID="a12d46a3-06e3-4d54-91bd-7e5493755a22" containerID="00c07ab599d501b4f68f68a0ba1a3de60c18d33c03659585b15f7262b284c900" exitCode=0 Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.165481 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a12d46a3-06e3-4d54-91bd-7e5493755a22","Type":"ContainerDied","Data":"00c07ab599d501b4f68f68a0ba1a3de60c18d33c03659585b15f7262b284c900"} Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.530321 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.617305 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir\") pod \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.617397 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access\") pod \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\" (UID: \"a241b5bb-7416-4e64-a2d6-2cf6145fea80\") " Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.617659 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.618324 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a241b5bb-7416-4e64-a2d6-2cf6145fea80" (UID: "a241b5bb-7416-4e64-a2d6-2cf6145fea80"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.625150 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a241b5bb-7416-4e64-a2d6-2cf6145fea80" (UID: "a241b5bb-7416-4e64-a2d6-2cf6145fea80"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.641525 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a4ef536-778e-47e5-afb2-539e96eba778-metrics-certs\") pod \"network-metrics-daemon-bs2t7\" (UID: \"6a4ef536-778e-47e5-afb2-539e96eba778\") " pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.709206 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs2t7" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.719001 4619 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:46 crc kubenswrapper[4619]: I0126 10:57:46.719058 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a241b5bb-7416-4e64-a2d6-2cf6145fea80-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.152702 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:47 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:47 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:47 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.153109 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.219271 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.219975 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a241b5bb-7416-4e64-a2d6-2cf6145fea80","Type":"ContainerDied","Data":"4dfbdbec686787908f89a53dce212fba2f89ef613996bc00f82cdaf6ce681c2c"} Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.220072 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dfbdbec686787908f89a53dce212fba2f89ef613996bc00f82cdaf6ce681c2c" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.229064 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bs2t7"] Jan 26 10:57:47 crc kubenswrapper[4619]: W0126 10:57:47.260768 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4ef536_778e_47e5_afb2_539e96eba778.slice/crio-7e10762f41ff8d0618ec19713799b842eca41c9af9735c77b8ab11f2eb52457f WatchSource:0}: Error finding container 7e10762f41ff8d0618ec19713799b842eca41c9af9735c77b8ab11f2eb52457f: Status 404 returned error can't find the container with id 7e10762f41ff8d0618ec19713799b842eca41c9af9735c77b8ab11f2eb52457f Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.589443 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.654582 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access\") pod \"a12d46a3-06e3-4d54-91bd-7e5493755a22\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.654648 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir\") pod \"a12d46a3-06e3-4d54-91bd-7e5493755a22\" (UID: \"a12d46a3-06e3-4d54-91bd-7e5493755a22\") " Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.654884 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a12d46a3-06e3-4d54-91bd-7e5493755a22" (UID: "a12d46a3-06e3-4d54-91bd-7e5493755a22"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.655679 4619 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12d46a3-06e3-4d54-91bd-7e5493755a22-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.661674 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a12d46a3-06e3-4d54-91bd-7e5493755a22" (UID: "a12d46a3-06e3-4d54-91bd-7e5493755a22"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.756965 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a12d46a3-06e3-4d54-91bd-7e5493755a22-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:57:47 crc kubenswrapper[4619]: I0126 10:57:47.993534 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k4prd" Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.151530 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:48 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:48 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:48 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.152188 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.240016 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a12d46a3-06e3-4d54-91bd-7e5493755a22","Type":"ContainerDied","Data":"64f49b493d608a7d4cde96889d4df911508c9e417114ef82ebee20cac14dc5ad"} Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.240067 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f49b493d608a7d4cde96889d4df911508c9e417114ef82ebee20cac14dc5ad" Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.240184 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.268926 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" event={"ID":"6a4ef536-778e-47e5-afb2-539e96eba778","Type":"ContainerStarted","Data":"7e10762f41ff8d0618ec19713799b842eca41c9af9735c77b8ab11f2eb52457f"} Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.497405 4619 patch_prober.go:28] interesting pod/console-f9d7485db-zdjpz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 10:57:48 crc kubenswrapper[4619]: I0126 10:57:48.497563 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zdjpz" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 10:57:49 crc kubenswrapper[4619]: I0126 10:57:49.150955 4619 patch_prober.go:28] interesting pod/router-default-5444994796-snrcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 10:57:49 crc kubenswrapper[4619]: [-]has-synced failed: reason withheld Jan 26 10:57:49 crc kubenswrapper[4619]: [+]process-running ok Jan 26 10:57:49 crc kubenswrapper[4619]: healthz check failed Jan 26 10:57:49 crc kubenswrapper[4619]: I0126 10:57:49.151050 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snrcm" podUID="3865e56a-bf16-4cd0-b7c1-f74e98381c5b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 10:57:50 crc kubenswrapper[4619]: I0126 10:57:50.151161 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:50 crc kubenswrapper[4619]: I0126 10:57:50.160429 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-snrcm" Jan 26 10:57:50 crc kubenswrapper[4619]: I0126 10:57:50.751960 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" event={"ID":"6a4ef536-778e-47e5-afb2-539e96eba778","Type":"ContainerStarted","Data":"fe9fa77f7401d70c11f28376e0b6bd7fde406d01cb21d8b24c8802d08536fa05"} Jan 26 10:57:51 crc kubenswrapper[4619]: I0126 10:57:51.833444 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs2t7" event={"ID":"6a4ef536-778e-47e5-afb2-539e96eba778","Type":"ContainerStarted","Data":"2853daa2007546478cb9e28efa0cd00422487ab697ea11f39d81c787f7c02829"} Jan 26 10:57:51 crc kubenswrapper[4619]: I0126 10:57:51.860801 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bs2t7" podStartSLOduration=147.860758529 podStartE2EDuration="2m27.860758529s" podCreationTimestamp="2026-01-26 10:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:57:51.853233614 +0000 UTC m=+170.887274330" watchObservedRunningTime="2026-01-26 10:57:51.860758529 +0000 UTC m=+170.894799265" Jan 26 10:57:52 crc 
kubenswrapper[4619]: I0126 10:57:52.850638 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5fqdl_0026923c-f271-4a2c-864b-71f051a5f093/cluster-samples-operator/0.log" Jan 26 10:57:52 crc kubenswrapper[4619]: I0126 10:57:52.851329 4619 generic.go:334] "Generic (PLEG): container finished" podID="0026923c-f271-4a2c-864b-71f051a5f093" containerID="eb115a326beee57ef2f6a43c1fb8c744a5fba7b621cc7aad43e6059c2e941d3c" exitCode=2 Jan 26 10:57:52 crc kubenswrapper[4619]: I0126 10:57:52.851416 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" event={"ID":"0026923c-f271-4a2c-864b-71f051a5f093","Type":"ContainerDied","Data":"eb115a326beee57ef2f6a43c1fb8c744a5fba7b621cc7aad43e6059c2e941d3c"} Jan 26 10:57:52 crc kubenswrapper[4619]: I0126 10:57:52.851882 4619 scope.go:117] "RemoveContainer" containerID="eb115a326beee57ef2f6a43c1fb8c744a5fba7b621cc7aad43e6059c2e941d3c" Jan 26 10:57:58 crc kubenswrapper[4619]: I0126 10:57:58.513272 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:57:58 crc kubenswrapper[4619]: I0126 10:57:58.519227 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zdjpz" Jan 26 10:58:00 crc kubenswrapper[4619]: I0126 10:58:00.180332 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 10:58:07 crc kubenswrapper[4619]: I0126 10:58:07.220746 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 10:58:08 crc kubenswrapper[4619]: I0126 10:58:08.976739 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-shltb" Jan 26 10:58:14 crc kubenswrapper[4619]: I0126 10:58:14.234480 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:58:14 crc kubenswrapper[4619]: I0126 10:58:14.235298 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.808453 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.808910 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86fcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wknfd_openshift-marketplace(c8f86935-b38f-41f5-a236-0d09213a5077): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.810115 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wknfd" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.855022 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.855197 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4ql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7mntj_openshift-marketplace(74eaac61-fd26-4596-ab74-d1282c0baf2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:17 crc kubenswrapper[4619]: E0126 10:58:17.856405 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7mntj" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" Jan 26 10:58:20 crc kubenswrapper[4619]: E0126 10:58:20.467346 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wknfd" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" Jan 26 10:58:20 crc kubenswrapper[4619]: E0126 10:58:20.467370 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7mntj" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" Jan 26 10:58:20 crc kubenswrapper[4619]: E0126 10:58:20.528741 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 10:58:20 crc kubenswrapper[4619]: E0126 10:58:20.528930 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbk9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bsqjj_openshift-marketplace(7a310950-4656-4955-b453-e846f29f47d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:20 crc kubenswrapper[4619]: E0126 10:58:20.530172 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bsqjj" podUID="7a310950-4656-4955-b453-e846f29f47d8" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.785578 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 10:58:22 crc kubenswrapper[4619]: E0126 10:58:22.785835 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a241b5bb-7416-4e64-a2d6-2cf6145fea80" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.785848 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a241b5bb-7416-4e64-a2d6-2cf6145fea80" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: E0126 10:58:22.785863 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12d46a3-06e3-4d54-91bd-7e5493755a22" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.785869 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12d46a3-06e3-4d54-91bd-7e5493755a22" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.785965 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a241b5bb-7416-4e64-a2d6-2cf6145fea80" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.785974 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12d46a3-06e3-4d54-91bd-7e5493755a22" containerName="pruner" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.786321 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.790080 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.791100 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.827877 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.946912 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:22 crc kubenswrapper[4619]: I0126 10:58:22.947002 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.048471 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.048563 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.048924 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.075044 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.109231 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.384415 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bsqjj" podUID="7a310950-4656-4955-b453-e846f29f47d8" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.504397 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.504566 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-467ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tvhzw_openshift-marketplace(3d6d6055-6441-4f47-8107-8886901691cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.519572 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tvhzw" podUID="3d6d6055-6441-4f47-8107-8886901691cc" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.583490 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.583768 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb2j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-krm7h_openshift-marketplace(ac07b4ac-4523-451f-91c0-9c4754786fce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.585311 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-krm7h" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.589566 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.590036 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6j8xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jlwxk_openshift-marketplace(a956e8a2-fc10-4698-a40f-71503dbd4542): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.592135 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jlwxk" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.597353 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.597505 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfxc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bf7nn_openshift-marketplace(b13c3450-3323-4a93-912f-e727bf9e75f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 10:58:23 crc kubenswrapper[4619]: E0126 10:58:23.599318 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bf7nn" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" Jan 26 10:58:23 crc kubenswrapper[4619]: I0126 10:58:23.964218 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 10:58:24 crc kubenswrapper[4619]: I0126 10:58:24.121023 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5fqdl_0026923c-f271-4a2c-864b-71f051a5f093/cluster-samples-operator/0.log" Jan 26 10:58:24 crc kubenswrapper[4619]: I0126 10:58:24.121120 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5fqdl" event={"ID":"0026923c-f271-4a2c-864b-71f051a5f093","Type":"ContainerStarted","Data":"6a5dcda3dfff1ce231e818fe03d82c76e7d506decc4daeef934887c287ec0441"} Jan 26 10:58:24 crc kubenswrapper[4619]: I0126 10:58:24.124092 4619 generic.go:334] "Generic (PLEG): container finished" podID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerID="7383259c69cf74f769fcc835d0d4c6d450611f9d01309c4ef2f1101e410ba6aa" exitCode=0 Jan 26 10:58:24 crc kubenswrapper[4619]: I0126 10:58:24.124374 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerDied","Data":"7383259c69cf74f769fcc835d0d4c6d450611f9d01309c4ef2f1101e410ba6aa"} Jan 26 10:58:24 crc kubenswrapper[4619]: I0126 10:58:24.127683 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"06192c83-9f64-4e29-909f-09182bc64f21","Type":"ContainerStarted","Data":"0509196cfc4f29217691025dc04fcc8af7e1d5b84d338a18f099990087e6fe06"} Jan 26 10:58:24 crc kubenswrapper[4619]: E0126 10:58:24.129888 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-krm7h" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" Jan 26 10:58:24 crc kubenswrapper[4619]: E0126 10:58:24.130311 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bf7nn" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" Jan 26 10:58:24 crc kubenswrapper[4619]: E0126 10:58:24.133691 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jlwxk" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" Jan 26 10:58:24 crc kubenswrapper[4619]: E0126 10:58:24.144695 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tvhzw" podUID="3d6d6055-6441-4f47-8107-8886901691cc" Jan 26 10:58:25 crc kubenswrapper[4619]: I0126 10:58:25.136697 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerStarted","Data":"d88bfce001571cbcc69a65143cbf6b4836cbee73bcdfeb94194e87948b7acfbf"} Jan 26 10:58:25 crc kubenswrapper[4619]: I0126 10:58:25.138312 4619 generic.go:334] "Generic (PLEG): container finished" podID="06192c83-9f64-4e29-909f-09182bc64f21" containerID="946f7336940a006986d3daa944801e87a62c0160d6206d9990876a90ece99ca1" exitCode=0 Jan 26 10:58:25 crc kubenswrapper[4619]: I0126 10:58:25.138347 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"06192c83-9f64-4e29-909f-09182bc64f21","Type":"ContainerDied","Data":"946f7336940a006986d3daa944801e87a62c0160d6206d9990876a90ece99ca1"} Jan 26 10:58:25 crc kubenswrapper[4619]: I0126 10:58:25.156888 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlrvs" podStartSLOduration=4.202643687 podStartE2EDuration="48.156864729s" podCreationTimestamp="2026-01-26 10:57:37 +0000 UTC" firstStartedPulling="2026-01-26 10:57:40.579734356 +0000 UTC m=+159.613775072" lastFinishedPulling="2026-01-26 10:58:24.533955398 +0000 UTC m=+203.567996114" observedRunningTime="2026-01-26 10:58:25.152053329 +0000 UTC m=+204.186094045" watchObservedRunningTime="2026-01-26 10:58:25.156864729 +0000 UTC m=+204.190905455" Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.399233 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.521316 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir\") pod \"06192c83-9f64-4e29-909f-09182bc64f21\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.521503 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access\") pod \"06192c83-9f64-4e29-909f-09182bc64f21\" (UID: \"06192c83-9f64-4e29-909f-09182bc64f21\") " Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.521671 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "06192c83-9f64-4e29-909f-09182bc64f21" (UID: "06192c83-9f64-4e29-909f-09182bc64f21"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.526755 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "06192c83-9f64-4e29-909f-09182bc64f21" (UID: "06192c83-9f64-4e29-909f-09182bc64f21"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.623252 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06192c83-9f64-4e29-909f-09182bc64f21-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:26 crc kubenswrapper[4619]: I0126 10:58:26.623296 4619 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06192c83-9f64-4e29-909f-09182bc64f21-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:27 crc kubenswrapper[4619]: I0126 10:58:27.149458 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"06192c83-9f64-4e29-909f-09182bc64f21","Type":"ContainerDied","Data":"0509196cfc4f29217691025dc04fcc8af7e1d5b84d338a18f099990087e6fe06"} Jan 26 10:58:27 crc kubenswrapper[4619]: I0126 10:58:27.149506 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0509196cfc4f29217691025dc04fcc8af7e1d5b84d338a18f099990087e6fe06" Jan 26 10:58:27 crc kubenswrapper[4619]: I0126 10:58:27.149587 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 10:58:28 crc kubenswrapper[4619]: I0126 10:58:28.094527 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlrvs" Jan 26 10:58:28 crc kubenswrapper[4619]: I0126 10:58:28.094970 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlrvs" Jan 26 10:58:28 crc kubenswrapper[4619]: I0126 10:58:28.163744 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlrvs" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.596311 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 10:58:30 crc kubenswrapper[4619]: E0126 10:58:30.597950 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06192c83-9f64-4e29-909f-09182bc64f21" containerName="pruner" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.598087 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="06192c83-9f64-4e29-909f-09182bc64f21" containerName="pruner" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.598339 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="06192c83-9f64-4e29-909f-09182bc64f21" containerName="pruner" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.599102 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.602273 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.602713 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.622044 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.730248 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.730348 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.730370 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.831650 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir\") pod \"installer-9-crc\" 
(UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.831714 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.831773 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.831919 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.832292 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.865992 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access\") pod \"installer-9-crc\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:30 crc kubenswrapper[4619]: I0126 10:58:30.914834 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 10:58:31 crc kubenswrapper[4619]: I0126 10:58:31.349476 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 10:58:31 crc kubenswrapper[4619]: W0126 10:58:31.351776 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebe1934c_ce73_47cb_8246_ce6f47742100.slice/crio-3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6 WatchSource:0}: Error finding container 3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6: Status 404 returned error can't find the container with id 3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6 Jan 26 10:58:32 crc kubenswrapper[4619]: I0126 10:58:32.194576 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ebe1934c-ce73-47cb-8246-ce6f47742100","Type":"ContainerStarted","Data":"3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6"} Jan 26 10:58:34 crc kubenswrapper[4619]: I0126 10:58:34.213229 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerStarted","Data":"49c4c4b9437444610dd717888f8f37fcbc12fab3c5f8d912ca2832c6df6f31f5"} Jan 26 10:58:34 crc kubenswrapper[4619]: I0126 10:58:34.214886 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ebe1934c-ce73-47cb-8246-ce6f47742100","Type":"ContainerStarted","Data":"f6ff6a8cebcc517d4e07ac9f5a229f45570eabe45c11712f6c6f09417a799eaa"} Jan 26 10:58:34 crc kubenswrapper[4619]: I0126 10:58:34.217275 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerStarted","Data":"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c"} Jan 26 10:58:34 crc kubenswrapper[4619]: I0126 10:58:34.256048 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.256022752 podStartE2EDuration="4.256022752s" podCreationTimestamp="2026-01-26 10:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:58:34.250595185 +0000 UTC m=+213.284635921" watchObservedRunningTime="2026-01-26 10:58:34.256022752 +0000 UTC m=+213.290063468" Jan 26 10:58:35 crc kubenswrapper[4619]: I0126 10:58:35.226261 4619 generic.go:334] "Generic (PLEG): container finished" podID="c8f86935-b38f-41f5-a236-0d09213a5077" containerID="429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c" exitCode=0 Jan 26 10:58:35 crc kubenswrapper[4619]: I0126 10:58:35.226381 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerDied","Data":"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c"} Jan 26 10:58:35 crc kubenswrapper[4619]: I0126 10:58:35.229307 4619 generic.go:334] "Generic (PLEG): container finished" podID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerID="49c4c4b9437444610dd717888f8f37fcbc12fab3c5f8d912ca2832c6df6f31f5" exitCode=0 Jan 26 10:58:35 crc kubenswrapper[4619]: I0126 10:58:35.229722 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerDied","Data":"49c4c4b9437444610dd717888f8f37fcbc12fab3c5f8d912ca2832c6df6f31f5"} Jan 26 10:58:36 crc kubenswrapper[4619]: I0126 10:58:36.253302 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerStarted","Data":"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880"} Jan 26 10:58:36 crc kubenswrapper[4619]: I0126 10:58:36.262026 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerStarted","Data":"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d"} Jan 26 10:58:36 crc kubenswrapper[4619]: I0126 10:58:36.265934 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerStarted","Data":"968d33e5def046d9d75f7d642b861e9fb5e10fa0b069db29f9cca1fdb699290e"} Jan 26 10:58:36 crc kubenswrapper[4619]: I0126 10:58:36.304333 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wknfd" podStartSLOduration=2.604313416 podStartE2EDuration="55.304313121s" podCreationTimestamp="2026-01-26 10:57:41 +0000 UTC" firstStartedPulling="2026-01-26 10:57:43.011153031 +0000 UTC m=+162.045193757" lastFinishedPulling="2026-01-26 10:58:35.711152746 +0000 UTC m=+214.745193462" observedRunningTime="2026-01-26 10:58:36.301210487 +0000 UTC m=+215.335251203" watchObservedRunningTime="2026-01-26 10:58:36.304313121 +0000 UTC m=+215.338353837" Jan 26 10:58:36 crc kubenswrapper[4619]: I0126 10:58:36.321379 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mntj" podStartSLOduration=3.689401995 podStartE2EDuration="56.321360533s" podCreationTimestamp="2026-01-26 10:57:40 +0000 UTC" firstStartedPulling="2026-01-26 10:57:43.043741366 +0000 UTC m=+162.077782082" lastFinishedPulling="2026-01-26 10:58:35.675699904 +0000 UTC m=+214.709740620" observedRunningTime="2026-01-26 10:58:36.31976264 +0000 UTC m=+215.353803356" watchObservedRunningTime="2026-01-26 10:58:36.321360533 +0000 UTC m=+215.355401249" Jan 26 10:58:37 crc kubenswrapper[4619]: I0126 10:58:37.271989 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerStarted","Data":"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66"} Jan 26 10:58:37 crc kubenswrapper[4619]: I0126 10:58:37.276711 4619 generic.go:334] "Generic (PLEG): container finished" podID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerID="8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880" exitCode=0 Jan 26 10:58:37 crc kubenswrapper[4619]: I0126 10:58:37.276741 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerDied","Data":"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880"} Jan 26 10:58:37 crc kubenswrapper[4619]: I0126 10:58:37.276793 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" 
event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerStarted","Data":"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894"} Jan 26 10:58:37 crc kubenswrapper[4619]: I0126 10:58:37.351955 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bf7nn" podStartSLOduration=4.238404061 podStartE2EDuration="1m0.351924562s" podCreationTimestamp="2026-01-26 10:57:37 +0000 UTC" firstStartedPulling="2026-01-26 10:57:40.642880063 +0000 UTC m=+159.676920779" lastFinishedPulling="2026-01-26 10:58:36.756400564 +0000 UTC m=+215.790441280" observedRunningTime="2026-01-26 10:58:37.333689097 +0000 UTC m=+216.367729833" watchObservedRunningTime="2026-01-26 10:58:37.351924562 +0000 UTC m=+216.385965278" Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.138035 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlrvs" Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.283927 4619 generic.go:334] "Generic (PLEG): container finished" podID="7a310950-4656-4955-b453-e846f29f47d8" containerID="ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66" exitCode=0 Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.284006 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerDied","Data":"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66"} Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.287417 4619 generic.go:334] "Generic (PLEG): container finished" podID="3d6d6055-6441-4f47-8107-8886901691cc" containerID="59f7945f3dab4bd31c278b496328a70387f7e8edaea69a7a270820e9fe7a352d" exitCode=0 Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.287447 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerDied","Data":"59f7945f3dab4bd31c278b496328a70387f7e8edaea69a7a270820e9fe7a352d"} Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.678863 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:38 crc kubenswrapper[4619]: I0126 10:58:38.678917 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:39 crc kubenswrapper[4619]: I0126 10:58:39.294174 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerStarted","Data":"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4"} Jan 26 10:58:39 crc kubenswrapper[4619]: I0126 10:58:39.719542 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bf7nn" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="registry-server" probeResult="failure" output=< Jan 26 10:58:39 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 10:58:39 crc kubenswrapper[4619]: > Jan 26 10:58:40 crc kubenswrapper[4619]: I0126 10:58:40.300942 4619 generic.go:334] "Generic (PLEG): container finished" podID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerID="6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4" exitCode=0 Jan 26 10:58:40 crc kubenswrapper[4619]: I0126 
10:58:40.301039 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerDied","Data":"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4"} Jan 26 10:58:40 crc kubenswrapper[4619]: I0126 10:58:40.304564 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerStarted","Data":"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94"} Jan 26 10:58:40 crc kubenswrapper[4619]: I0126 10:58:40.307367 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerStarted","Data":"7d80ba93fa7dfba3ed6351b2d1b60d5484ea2c7bab0a21b8e508cc8184092f47"} Jan 26 10:58:40 crc kubenswrapper[4619]: I0126 10:58:40.357631 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tvhzw" podStartSLOduration=4.260286018 podStartE2EDuration="1m1.357601433s" podCreationTimestamp="2026-01-26 10:57:39 +0000 UTC" firstStartedPulling="2026-01-26 10:57:41.688571088 +0000 UTC m=+160.722611804" lastFinishedPulling="2026-01-26 10:58:38.785886503 +0000 UTC m=+217.819927219" observedRunningTime="2026-01-26 10:58:40.356774131 +0000 UTC m=+219.390814847" watchObservedRunningTime="2026-01-26 10:58:40.357601433 +0000 UTC m=+219.391642149" Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.112382 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.112704 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.315461 4619 generic.go:334] "Generic (PLEG): container finished" podID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerID="ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f" exitCode=0 Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.315510 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerDied","Data":"ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f"} Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.341302 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsqjj" podStartSLOduration=6.104201603 podStartE2EDuration="1m4.341280799s" podCreationTimestamp="2026-01-26 10:57:37 +0000 UTC" firstStartedPulling="2026-01-26 10:57:40.60381271 +0000 UTC m=+159.637853426" lastFinishedPulling="2026-01-26 10:58:38.840891906 +0000 UTC m=+217.874932622" observedRunningTime="2026-01-26 10:58:40.378655225 +0000 UTC m=+219.412695941" watchObservedRunningTime="2026-01-26 10:58:41.341280799 +0000 UTC m=+220.375321515" Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.538074 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:58:41 crc kubenswrapper[4619]: I0126 10:58:41.538647 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:58:42 crc kubenswrapper[4619]: I0126 
10:58:42.147162 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mntj" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="registry-server" probeResult="failure" output=< Jan 26 10:58:42 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 10:58:42 crc kubenswrapper[4619]: > Jan 26 10:58:42 crc kubenswrapper[4619]: I0126 10:58:42.323774 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerStarted","Data":"0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b"} Jan 26 10:58:42 crc kubenswrapper[4619]: I0126 10:58:42.577523 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wknfd" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="registry-server" probeResult="failure" output=< Jan 26 10:58:42 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 10:58:42 crc kubenswrapper[4619]: > Jan 26 10:58:43 crc kubenswrapper[4619]: I0126 10:58:43.364822 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jlwxk" podStartSLOduration=4.45281911 podStartE2EDuration="1m5.364795085s" podCreationTimestamp="2026-01-26 10:57:38 +0000 UTC" firstStartedPulling="2026-01-26 10:57:40.54479646 +0000 UTC m=+159.578837176" lastFinishedPulling="2026-01-26 10:58:41.456772435 +0000 UTC m=+220.490813151" observedRunningTime="2026-01-26 10:58:43.360963181 +0000 UTC m=+222.395003917" watchObservedRunningTime="2026-01-26 10:58:43.364795085 +0000 UTC m=+222.398835821" Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.234872 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.234962 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.235040 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.235793 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.235909 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd" gracePeriod=600 Jan 26 10:58:44 crc 
kubenswrapper[4619]: I0126 10:58:44.338127 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerStarted","Data":"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda"} Jan 26 10:58:44 crc kubenswrapper[4619]: I0126 10:58:44.359179 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krm7h" podStartSLOduration=4.026227156 podStartE2EDuration="1m4.359152091s" podCreationTimestamp="2026-01-26 10:57:40 +0000 UTC" firstStartedPulling="2026-01-26 10:57:42.948274141 +0000 UTC m=+161.982314857" lastFinishedPulling="2026-01-26 10:58:43.281199056 +0000 UTC m=+222.315239792" observedRunningTime="2026-01-26 10:58:44.355233365 +0000 UTC m=+223.389274071" watchObservedRunningTime="2026-01-26 10:58:44.359152091 +0000 UTC m=+223.393192807" Jan 26 10:58:46 crc kubenswrapper[4619]: I0126 10:58:46.364258 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd" exitCode=0 Jan 26 10:58:46 crc kubenswrapper[4619]: I0126 10:58:46.364300 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd"} Jan 26 10:58:47 crc kubenswrapper[4619]: I0126 10:58:47.971727 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 10:58:47 crc kubenswrapper[4619]: I0126 10:58:47.972088 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.042269 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.377908 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588"} Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.426889 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.506371 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.506430 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.542836 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.713525 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:48 crc kubenswrapper[4619]: I0126 10:58:48.757235 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:49 crc kubenswrapper[4619]: I0126 10:58:49.442438 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.102206 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.102731 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.146535 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.309463 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"] Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.389583 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bf7nn" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="registry-server" containerID="cri-o://d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894" gracePeriod=2 Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.434665 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.511887 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.513075 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.556891 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.745490 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.917347 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content\") pod \"b13c3450-3323-4a93-912f-e727bf9e75f3\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.917422 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfxc5\" (UniqueName: \"kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5\") pod \"b13c3450-3323-4a93-912f-e727bf9e75f3\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.917454 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities\") pod \"b13c3450-3323-4a93-912f-e727bf9e75f3\" (UID: \"b13c3450-3323-4a93-912f-e727bf9e75f3\") " Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.918761 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities" (OuterVolumeSpecName: "utilities") pod "b13c3450-3323-4a93-912f-e727bf9e75f3" (UID: "b13c3450-3323-4a93-912f-e727bf9e75f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:50 crc kubenswrapper[4619]: I0126 10:58:50.975139 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b13c3450-3323-4a93-912f-e727bf9e75f3" (UID: "b13c3450-3323-4a93-912f-e727bf9e75f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.152227 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.152261 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c3450-3323-4a93-912f-e727bf9e75f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.385661 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5" (OuterVolumeSpecName: "kube-api-access-pfxc5") pod "b13c3450-3323-4a93-912f-e727bf9e75f3" (UID: "b13c3450-3323-4a93-912f-e727bf9e75f3"). InnerVolumeSpecName "kube-api-access-pfxc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.412176 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlwxk"] Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.412795 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerDied","Data":"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894"} Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.412848 4619 scope.go:117] "RemoveContainer" containerID="d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.412258 4619 generic.go:334] "Generic (PLEG): container finished" podID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerID="d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894" exitCode=0 Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.413913 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jlwxk" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="registry-server" containerID="cri-o://0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b" gracePeriod=2 Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.412340 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf7nn" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.416568 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf7nn" event={"ID":"b13c3450-3323-4a93-912f-e727bf9e75f3","Type":"ContainerDied","Data":"84373ef920155313ec97956d0417c12205e11a54496de0c7bd2b4f482184f0c4"} Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.420791 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.441193 4619 scope.go:117] "RemoveContainer" containerID="8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.455254 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfxc5\" (UniqueName: \"kubernetes.io/projected/b13c3450-3323-4a93-912f-e727bf9e75f3-kube-api-access-pfxc5\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.458110 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.464169 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mntj" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.468604 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"] Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.474941 4619 scope.go:117] "RemoveContainer" containerID="301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.485534 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bf7nn"] Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.558803 4619 scope.go:117] "RemoveContainer" 
containerID="d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894" Jan 26 10:58:51 crc kubenswrapper[4619]: E0126 10:58:51.559397 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894\": container with ID starting with d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894 not found: ID does not exist" containerID="d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.559429 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894"} err="failed to get container status \"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894\": rpc error: code = NotFound desc = could not find container \"d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894\": container with ID starting with d46c5fe42f375a0964ceaad25dd7a2c549afd0bb3ce4b382b47324942283a894 not found: ID does not exist" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.559472 4619 scope.go:117] "RemoveContainer" containerID="8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880" Jan 26 10:58:51 crc kubenswrapper[4619]: E0126 10:58:51.559830 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880\": container with ID starting with 8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880 not found: ID does not exist" containerID="8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.559856 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880"} err="failed to get container status \"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880\": rpc error: code = NotFound desc = could not find container \"8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880\": container with ID starting with 8b56f5076865c72f3392a9f68047ef9d45b54eedcf7d476b2696ed18f371c880 not found: ID does not exist" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.559875 4619 scope.go:117] "RemoveContainer" containerID="301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6" Jan 26 10:58:51 crc kubenswrapper[4619]: E0126 10:58:51.560081 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6\": container with ID starting with 301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6 not found: ID does not exist" containerID="301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6" Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.560101 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6"} err="failed to get container status \"301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6\": rpc error: code = NotFound desc = could not find container \"301c7aafdde3152e5cb030f2059744f14ce4d6fa1ca9ef8893cd524d608f88a6\": container with ID starting with 
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.597093 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wknfd"
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.642265 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wknfd"
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.836133 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlwxk"
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.960416 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities\") pod \"a956e8a2-fc10-4698-a40f-71503dbd4542\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") "
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.960519 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content\") pod \"a956e8a2-fc10-4698-a40f-71503dbd4542\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") "
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.960544 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j8xw\" (UniqueName: \"kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw\") pod \"a956e8a2-fc10-4698-a40f-71503dbd4542\" (UID: \"a956e8a2-fc10-4698-a40f-71503dbd4542\") "
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.961630 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities" (OuterVolumeSpecName: "utilities") pod "a956e8a2-fc10-4698-a40f-71503dbd4542" (UID: "a956e8a2-fc10-4698-a40f-71503dbd4542"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 10:58:51 crc kubenswrapper[4619]: I0126 10:58:51.964584 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw" (OuterVolumeSpecName: "kube-api-access-6j8xw") pod "a956e8a2-fc10-4698-a40f-71503dbd4542" (UID: "a956e8a2-fc10-4698-a40f-71503dbd4542"). InnerVolumeSpecName "kube-api-access-6j8xw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.017643 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a956e8a2-fc10-4698-a40f-71503dbd4542" (UID: "a956e8a2-fc10-4698-a40f-71503dbd4542"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.061544 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.061924 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a956e8a2-fc10-4698-a40f-71503dbd4542-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.062043 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j8xw\" (UniqueName: \"kubernetes.io/projected/a956e8a2-fc10-4698-a40f-71503dbd4542-kube-api-access-6j8xw\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.419283 4619 generic.go:334] "Generic (PLEG): container finished" podID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerID="0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b" exitCode=0 Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.419374 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlwxk" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.419385 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerDied","Data":"0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b"} Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.419700 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlwxk" event={"ID":"a956e8a2-fc10-4698-a40f-71503dbd4542","Type":"ContainerDied","Data":"ccccd02fb5f3225e2331cdce02d5ca946ef3bc95e3b8f6eb58feb40651983757"} Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.419735 4619 scope.go:117] "RemoveContainer" containerID="0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.434227 4619 scope.go:117] "RemoveContainer" containerID="6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.447889 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlwxk"] Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.449324 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jlwxk"] Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.453533 4619 scope.go:117] "RemoveContainer" containerID="d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.469005 4619 scope.go:117] "RemoveContainer" containerID="0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b" Jan 26 10:58:52 crc kubenswrapper[4619]: E0126 10:58:52.469519 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b\": container with ID starting with 0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b not found: ID does not exist" containerID="0d177a8c65c001a225438694f97bea63f4643990c48e716cfd30983957f0759b" Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.469654 
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.469750 4619 scope.go:117] "RemoveContainer" containerID="6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4"
Jan 26 10:58:52 crc kubenswrapper[4619]: E0126 10:58:52.470191 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4\": container with ID starting with 6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4 not found: ID does not exist" containerID="6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4"
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.470245 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4"} err="failed to get container status \"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4\": rpc error: code = NotFound desc = could not find container \"6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4\": container with ID starting with 6b63b7c8cec551600b72d4e0b300058459ba94ecb71e1abc9c8b80444a8afca4 not found: ID does not exist"
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.470269 4619 scope.go:117] "RemoveContainer" containerID="d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb"
Jan 26 10:58:52 crc kubenswrapper[4619]: E0126 10:58:52.470585 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb\": container with ID starting with d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb not found: ID does not exist" containerID="d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb"
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.470660 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb"} err="failed to get container status \"d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb\": rpc error: code = NotFound desc = could not find container \"d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb\": container with ID starting with d502c5aa21d83d24d0cd02f14bb0b47e1f191d2f4c4f643fe2a38a1d2af867bb not found: ID does not exist"
Jan 26 10:58:52 crc kubenswrapper[4619]: I0126 10:58:52.509310 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"]
Jan 26 10:58:53 crc kubenswrapper[4619]: I0126 10:58:53.268387 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" path="/var/lib/kubelet/pods/a956e8a2-fc10-4698-a40f-71503dbd4542/volumes"
Jan 26 10:58:53 crc kubenswrapper[4619]: I0126 10:58:53.269441 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" path="/var/lib/kubelet/pods/b13c3450-3323-4a93-912f-e727bf9e75f3/volumes"
dir" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" path="/var/lib/kubelet/pods/b13c3450-3323-4a93-912f-e727bf9e75f3/volumes" Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.431066 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-krm7h" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="registry-server" containerID="cri-o://aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda" gracePeriod=2 Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.842195 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.997145 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2j6\" (UniqueName: \"kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6\") pod \"ac07b4ac-4523-451f-91c0-9c4754786fce\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.997233 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content\") pod \"ac07b4ac-4523-451f-91c0-9c4754786fce\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.997264 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities\") pod \"ac07b4ac-4523-451f-91c0-9c4754786fce\" (UID: \"ac07b4ac-4523-451f-91c0-9c4754786fce\") " Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.998112 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities" (OuterVolumeSpecName: "utilities") pod "ac07b4ac-4523-451f-91c0-9c4754786fce" (UID: "ac07b4ac-4523-451f-91c0-9c4754786fce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:54 crc kubenswrapper[4619]: I0126 10:58:54.998283 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.002917 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6" (OuterVolumeSpecName: "kube-api-access-xb2j6") pod "ac07b4ac-4523-451f-91c0-9c4754786fce" (UID: "ac07b4ac-4523-451f-91c0-9c4754786fce"). InnerVolumeSpecName "kube-api-access-xb2j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.024941 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac07b4ac-4523-451f-91c0-9c4754786fce" (UID: "ac07b4ac-4523-451f-91c0-9c4754786fce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.100222 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2j6\" (UniqueName: \"kubernetes.io/projected/ac07b4ac-4523-451f-91c0-9c4754786fce-kube-api-access-xb2j6\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.100267 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac07b4ac-4523-451f-91c0-9c4754786fce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.442652 4619 generic.go:334] "Generic (PLEG): container finished" podID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerID="aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda" exitCode=0 Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.442862 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerDied","Data":"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda"} Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.443230 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krm7h" event={"ID":"ac07b4ac-4523-451f-91c0-9c4754786fce","Type":"ContainerDied","Data":"cf80e00b5df6715f99300d45f90b71834d46db29ce3230a2154b855d102980c9"} Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.443262 4619 scope.go:117] "RemoveContainer" containerID="aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.442963 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krm7h" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.466957 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"] Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.469506 4619 scope.go:117] "RemoveContainer" containerID="ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.470264 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-krm7h"] Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.551147 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.551422 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wknfd" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="registry-server" containerID="cri-o://b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d" gracePeriod=2 Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.552854 4619 scope.go:117] "RemoveContainer" containerID="0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.567566 4619 scope.go:117] "RemoveContainer" containerID="aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda" Jan 26 10:58:55 crc kubenswrapper[4619]: E0126 10:58:55.569114 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda\": container with ID starting with aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda not found: ID does not exist" containerID="aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.569151 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda"} err="failed to get container status \"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda\": rpc error: code = NotFound desc = could not find container \"aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda\": container with ID starting with aefa826f1ffdf507290ea7921e5101e6e8feded13bbba4284145f87253bc7cda not found: ID does not exist" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.569178 4619 scope.go:117] "RemoveContainer" containerID="ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f" Jan 26 10:58:55 crc kubenswrapper[4619]: E0126 10:58:55.569468 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f\": container with ID starting with ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f not found: ID does not exist" containerID="ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.569502 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f"} err="failed to get container status \"ae5efce1eb376a123dfe1dd525ff379cbc2b70798423046f447b36515ea5be5f\": rpc error: code = 
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.569517 4619 scope.go:117] "RemoveContainer" containerID="0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0"
Jan 26 10:58:55 crc kubenswrapper[4619]: E0126 10:58:55.569724 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0\": container with ID starting with 0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0 not found: ID does not exist" containerID="0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0"
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.569752 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0"} err="failed to get container status \"0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0\": rpc error: code = NotFound desc = could not find container \"0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0\": container with ID starting with 0aafe1986c9cafee8703646bb92c90d3b956a2692b5cceb00f70592326b27ea0 not found: ID does not exist"
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.898219 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wknfd"
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.946168 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content\") pod \"c8f86935-b38f-41f5-a236-0d09213a5077\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") "
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.946227 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities\") pod \"c8f86935-b38f-41f5-a236-0d09213a5077\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") "
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.946370 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fcs\" (UniqueName: \"kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs\") pod \"c8f86935-b38f-41f5-a236-0d09213a5077\" (UID: \"c8f86935-b38f-41f5-a236-0d09213a5077\") "
Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.947223 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities" (OuterVolumeSpecName: "utilities") pod "c8f86935-b38f-41f5-a236-0d09213a5077" (UID: "c8f86935-b38f-41f5-a236-0d09213a5077"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.947607 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:55 crc kubenswrapper[4619]: I0126 10:58:55.952105 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs" (OuterVolumeSpecName: "kube-api-access-86fcs") pod "c8f86935-b38f-41f5-a236-0d09213a5077" (UID: "c8f86935-b38f-41f5-a236-0d09213a5077"). InnerVolumeSpecName "kube-api-access-86fcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.048603 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fcs\" (UniqueName: \"kubernetes.io/projected/c8f86935-b38f-41f5-a236-0d09213a5077-kube-api-access-86fcs\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.088025 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f86935-b38f-41f5-a236-0d09213a5077" (UID: "c8f86935-b38f-41f5-a236-0d09213a5077"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.149180 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f86935-b38f-41f5-a236-0d09213a5077-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.455146 4619 generic.go:334] "Generic (PLEG): container finished" podID="c8f86935-b38f-41f5-a236-0d09213a5077" containerID="b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d" exitCode=0 Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.455215 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerDied","Data":"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d"} Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.455258 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wknfd" event={"ID":"c8f86935-b38f-41f5-a236-0d09213a5077","Type":"ContainerDied","Data":"c6c9b9f7c0cc510ac813bb40f83762fa7b10c896dbfdc9bf17d7b607b827a3a2"} Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.455287 4619 scope.go:117] "RemoveContainer" containerID="b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.455516 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wknfd" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.479852 4619 scope.go:117] "RemoveContainer" containerID="429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.513501 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.517841 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wknfd"] Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.535506 4619 scope.go:117] "RemoveContainer" containerID="969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.574537 4619 scope.go:117] "RemoveContainer" containerID="b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d" Jan 26 10:58:56 crc kubenswrapper[4619]: E0126 10:58:56.576272 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d\": container with ID starting with b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d not found: ID does not exist" containerID="b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.576354 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d"} err="failed to get container status \"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d\": rpc error: code = NotFound desc = could not find container \"b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d\": container with ID starting with b3b86d69cd1710858e41f02dfca848a5c1311bc07d1a3c60a7a61b8be869501d not found: ID does not exist" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.576400 4619 scope.go:117] "RemoveContainer" containerID="429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c" Jan 26 10:58:56 crc kubenswrapper[4619]: E0126 10:58:56.577251 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c\": container with ID starting with 429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c not found: ID does not exist" containerID="429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.577382 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c"} err="failed to get container status \"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c\": rpc error: code = NotFound desc = could not find container \"429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c\": container with ID starting with 429eb4aded9d2971b4bb3ecf589bd72299f0e6ef9d258f04055efc3199aa537c not found: ID does not exist" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.577401 4619 scope.go:117] "RemoveContainer" containerID="969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70" Jan 26 10:58:56 crc kubenswrapper[4619]: E0126 10:58:56.578485 4619 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70\": container with ID starting with 969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70 not found: ID does not exist" containerID="969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70" Jan 26 10:58:56 crc kubenswrapper[4619]: I0126 10:58:56.578529 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70"} err="failed to get container status \"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70\": rpc error: code = NotFound desc = could not find container \"969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70\": container with ID starting with 969c872876824c81d408254faa0c3cbb9ae80b7f9870a0f75fb50d9352e96f70 not found: ID does not exist" Jan 26 10:58:57 crc kubenswrapper[4619]: I0126 10:58:57.270405 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" path="/var/lib/kubelet/pods/ac07b4ac-4523-451f-91c0-9c4754786fce/volumes" Jan 26 10:58:57 crc kubenswrapper[4619]: I0126 10:58:57.271820 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" path="/var/lib/kubelet/pods/c8f86935-b38f-41f5-a236-0d09213a5077/volumes" Jan 26 10:58:57 crc kubenswrapper[4619]: I0126 10:58:57.458444 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6jj4w"] Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.276417 4619 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.277284 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512" gracePeriod=15 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.277382 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406" gracePeriod=15 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.277348 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654" gracePeriod=15 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.277455 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161" gracePeriod=15 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.277344 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" containerID="cri-o://f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30" gracePeriod=15 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.281919 4619 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282159 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282176 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282184 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282190 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282198 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282205 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282212 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282218 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282226 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="extract-content" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282232 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="extract-content" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282243 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282248 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282254 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="extract-utilities" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282259 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="extract-utilities" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282267 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="extract-utilities" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282273 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="extract-utilities" Jan 26 10:59:11 crc 
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282287 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="registry-server"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282296 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282303 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282314 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="registry-server"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282321 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="registry-server"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282329 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="extract-utilities"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282334 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="extract-utilities"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282341 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282347 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282354 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="extract-utilities"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282359 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="extract-utilities"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282368 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282373 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282380 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282385 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="extract-content"
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282394 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="registry-server"
Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282400 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="registry-server"
podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282407 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282413 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282493 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282504 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282512 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956e8a2-fc10-4698-a40f-71503dbd4542" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282520 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f86935-b38f-41f5-a236-0d09213a5077" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282530 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac07b4ac-4523-451f-91c0-9c4754786fce" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282537 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282545 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282553 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13c3450-3323-4a93-912f-e727bf9e75f3" containerName="registry-server" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282563 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.282667 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282674 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.282774 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.284195 4619 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.284682 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.292211 4619 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.329201 4619 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458382 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458440 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458556 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458580 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458601 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458701 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.458717 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.538544 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.539752 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.540350 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30" exitCode=0 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.540378 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161" exitCode=0 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.540388 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654" exitCode=0 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.540399 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406" exitCode=2 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.540470 4619 scope.go:117] "RemoveContainer" containerID="5efec0ca66f4821c16cbe135dc2f3c0798953a4666c313d92188a8cc3a71e7a8" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.542148 4619 generic.go:334] "Generic (PLEG): container finished" podID="ebe1934c-ce73-47cb-8246-ce6f47742100" containerID="f6ff6a8cebcc517d4e07ac9f5a229f45570eabe45c11712f6c6f09417a799eaa" exitCode=0 Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.542191 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ebe1934c-ce73-47cb-8246-ce6f47742100","Type":"ContainerDied","Data":"f6ff6a8cebcc517d4e07ac9f5a229f45570eabe45c11712f6c6f09417a799eaa"} Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.546840 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.559527 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.559736 4619 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.559608 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.559896 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.559992 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560021 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560035 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560095 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560142 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560155 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560182 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560200 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560209 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560232 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560260 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.560376 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:11 crc kubenswrapper[4619]: I0126 10:59:11.630197 4619 util.go:30] "No sandbox for pod can be found. 
Jan 26 10:59:11 crc kubenswrapper[4619]: E0126 10:59:11.651190 4619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e42ccf4bfc6ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 10:59:11.65015422 +0000 UTC m=+250.684194936,LastTimestamp:2026-01-26 10:59:11.65015422 +0000 UTC m=+250.684194936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.547375 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5aa41818a55e10bc883950638bbf126ebf2925aa4b8fbe685ed139998df17277"}
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.547761 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3381810c2cb915ddb90ebb5e17fffaf4b667b221ef53a9b185fab530e9c7d1bc"}
Jan 26 10:59:12 crc kubenswrapper[4619]: E0126 10:59:12.548392 4619 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.548544 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.549714 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.806988 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
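The "Unable to write event (may retry after sleeping)" record above shows the kubelet failing to POST an Event while the API server it is rolling out refuses connections; the recorder keeps the event and retries after sleeping. A stdlib-only sketch of that retry shape, assuming a fixed sleep and attempt cap purely for illustration (the kubelet's actual event recorder is more involved):

```go
// Sketch of "may retry after sleeping": POST an event, sleeping between
// attempts while the endpoint refuses connections. URL and payload are
// stand-ins taken from the log line, not a working API call.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func postEvent(url string, body []byte) error {
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events"
	event := []byte(`{"reason":"Pulled"}`) // stand-in payload
	for attempt := 1; attempt <= 5; attempt++ {
		if err := postEvent(url, event); err != nil {
			fmt.Printf("Unable to write event (may retry after sleeping): %v\n", err)
			time.Sleep(10 * time.Second) // fixed sleep for the sketch
			continue
		}
		return
	}
	fmt.Println("giving up; event dropped")
}
```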
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.807738 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977009 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock\") pod \"ebe1934c-ce73-47cb-8246-ce6f47742100\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") "
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977129 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir\") pod \"ebe1934c-ce73-47cb-8246-ce6f47742100\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") "
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977142 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebe1934c-ce73-47cb-8246-ce6f47742100" (UID: "ebe1934c-ce73-47cb-8246-ce6f47742100"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977186 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access\") pod \"ebe1934c-ce73-47cb-8246-ce6f47742100\" (UID: \"ebe1934c-ce73-47cb-8246-ce6f47742100\") "
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977228 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebe1934c-ce73-47cb-8246-ce6f47742100" (UID: "ebe1934c-ce73-47cb-8246-ce6f47742100"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977459 4619 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.977494 4619 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebe1934c-ce73-47cb-8246-ce6f47742100-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:12 crc kubenswrapper[4619]: I0126 10:59:12.981419 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebe1934c-ce73-47cb-8246-ce6f47742100" (UID: "ebe1934c-ce73-47cb-8246-ce6f47742100"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.078351 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebe1934c-ce73-47cb-8246-ce6f47742100-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.556111 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ebe1934c-ce73-47cb-8246-ce6f47742100","Type":"ContainerDied","Data":"3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6"}
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.556455 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b6caa316a8704f861f915bcf6746a265af41119da454dabe7349279d483b9a6"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.556156 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.560542 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.743369 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.744058 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.744688 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.745146 4619 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789561 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789642 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789658 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789717 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789738 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789769 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789943 4619 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789964 4619 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:59:13 crc kubenswrapper[4619]: I0126 10:59:13.789978 4619 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.566591 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.567517 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512" exitCode=0 Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.567567 4619 scope.go:117] "RemoveContainer" containerID="f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.567643 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.584975 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.585456 4619 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.588552 4619 scope.go:117] "RemoveContainer" containerID="c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.602899 4619 scope.go:117] "RemoveContainer" containerID="f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.617321 4619 scope.go:117] "RemoveContainer" containerID="63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.630131 4619 scope.go:117] "RemoveContainer" containerID="c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.652600 4619 scope.go:117] "RemoveContainer" containerID="882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.678381 4619 scope.go:117] "RemoveContainer" containerID="f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30" Jan 26 10:59:14 crc 
kubenswrapper[4619]: E0126 10:59:14.678886 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\": container with ID starting with f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30 not found: ID does not exist" containerID="f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.678920 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30"} err="failed to get container status \"f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\": rpc error: code = NotFound desc = could not find container \"f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30\": container with ID starting with f609a4c64a4a75632a1fa955ce350e0c47d34c55c7db2529c54735c05222ee30 not found: ID does not exist" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.678969 4619 scope.go:117] "RemoveContainer" containerID="c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161" Jan 26 10:59:14 crc kubenswrapper[4619]: E0126 10:59:14.679408 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\": container with ID starting with c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161 not found: ID does not exist" containerID="c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.679454 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161"} err="failed to get container status \"c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\": rpc error: code = NotFound desc = could not find container \"c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161\": container with ID starting with c25e36e347eba937e09c30c64e8672fb62d5e648ef3b10132f39a68104aee161 not found: ID does not exist" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.679469 4619 scope.go:117] "RemoveContainer" containerID="f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654" Jan 26 10:59:14 crc kubenswrapper[4619]: E0126 10:59:14.680091 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\": container with ID starting with f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654 not found: ID does not exist" containerID="f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.680186 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654"} err="failed to get container status \"f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\": rpc error: code = NotFound desc = could not find container \"f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654\": container with ID starting with f5d9272142fd919aa3324b879500362608730304ff911d765e028b0c208ef654 not found: ID does not exist" Jan 26 10:59:14 crc kubenswrapper[4619]: 
I0126 10:59:14.680217 4619 scope.go:117] "RemoveContainer" containerID="63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406" Jan 26 10:59:14 crc kubenswrapper[4619]: E0126 10:59:14.682456 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\": container with ID starting with 63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406 not found: ID does not exist" containerID="63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.682520 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406"} err="failed to get container status \"63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\": rpc error: code = NotFound desc = could not find container \"63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406\": container with ID starting with 63f3726bf83ba0c3f364d40f3be11395b189927f34f86ed12a4088b7739e4406 not found: ID does not exist" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.682539 4619 scope.go:117] "RemoveContainer" containerID="c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512" Jan 26 10:59:14 crc kubenswrapper[4619]: E0126 10:59:14.683031 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\": container with ID starting with c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512 not found: ID does not exist" containerID="c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.683082 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512"} err="failed to get container status \"c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\": rpc error: code = NotFound desc = could not find container \"c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512\": container with ID starting with c754db7aa47218d7f5bff5b5d46d2b63f2cd1e1c0dc61cb32cfb5fd14bb89512 not found: ID does not exist" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.683114 4619 scope.go:117] "RemoveContainer" containerID="882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47" Jan 26 10:59:14 crc kubenswrapper[4619]: E0126 10:59:14.683464 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\": container with ID starting with 882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47 not found: ID does not exist" containerID="882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47" Jan 26 10:59:14 crc kubenswrapper[4619]: I0126 10:59:14.683489 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47"} err="failed to get container status \"882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\": rpc error: code = NotFound desc = could not find container \"882346161e264de9896888f7f843344be40e8382c0f15fdae5bc58c849cf0b47\": container 
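Each "RemoveContainer" above is answered by a NotFound "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pair: the containers were already gone, so the cleanup is effectively idempotent and the kubelet just logs the error and moves on. A sketch of NotFound-tolerant removal over a hypothetical runtime client, reusing the gRPC status-code convention the CRI errors above follow:

```go
// Sketch only: the runtime client interface is a stand-in, not the CRI.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is a hypothetical stand-in for a CRI runtime client.
type runtimeService interface {
	ContainerStatus(id string) error
	RemoveContainer(id string) error
}

// removeIfPresent tolerates NotFound: an ID the runtime no longer knows
// about needs no further cleanup.
func removeIfPresent(rt runtimeService, id string) error {
	if err := rt.ContainerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %q already removed\n", id)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return rt.RemoveContainer(id)
}

// fakeRuntime reproduces the NotFound answers seen in this log.
type fakeRuntime struct{}

func (fakeRuntime) ContainerStatus(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}
func (fakeRuntime) RemoveContainer(id string) error { return nil }

func main() {
	_ = removeIfPresent(fakeRuntime{}, "f609a4c64a4a7563")
}
```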
Jan 26 10:59:15 crc kubenswrapper[4619]: I0126 10:59:15.269358 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.545685 4619 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.546256 4619 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.546431 4619 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.546572 4619 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.546732 4619 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:16 crc kubenswrapper[4619]: I0126 10:59:16.546749 4619 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.546893 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms"
Jan 26 10:59:16 crc kubenswrapper[4619]: E0126 10:59:16.748186 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.149563 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.482646 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:59:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:59:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:59:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T10:59:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.484407 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.485144 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.485853 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.486441 4619 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.486688 4619 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 10:59:17 crc kubenswrapper[4619]: E0126 10:59:17.950350 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s"
Jan 26 10:59:19 crc kubenswrapper[4619]: E0126 10:59:19.551488 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s"
Jan 26 10:59:21 crc kubenswrapper[4619]: I0126 10:59:21.263478 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:21 crc kubenswrapper[4619]: E0126 10:59:21.265880 4619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e42ccf4bfc6ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 10:59:11.65015422 +0000 UTC m=+250.684194936,LastTimestamp:2026-01-26 10:59:11.65015422 +0000 UTC m=+250.684194936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 10:59:22 crc kubenswrapper[4619]: I0126 10:59:22.480797 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerName="oauth-openshift" containerID="cri-o://818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853" gracePeriod=15
Jan 26 10:59:22 crc kubenswrapper[4619]: E0126 10:59:22.753054 4619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s"
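The "Killing container with a grace period" record (gracePeriod=15) is the usual two-step stop: deliver SIGTERM, wait up to the grace period, then escalate to SIGKILL. A simplified sketch against a local child process; a real runtime does this through the CRI rather than os.Process, and the 2-second grace below is just to keep the example quick.

```go
// Sketch of a grace-period kill: SIGTERM first, SIGKILL only if the
// process outlives the grace period. Unix-only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

func killWithGracePeriod(proc *os.Process, grace time.Duration) error {
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() {
		_, err := proc.Wait() // valid here because proc is our child
		done <- err
	}()
	select {
	case err := <-done:
		return err // exited in time; PLEG would then report ContainerDied with its exitCode
	case <-time.After(grace):
		fmt.Println("grace period expired, sending SIGKILL")
		return proc.Signal(syscall.SIGKILL)
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in workload
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = killWithGracePeriod(cmd.Process, 2*time.Second)
}
```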
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.409043 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w"
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.410002 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.410468 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446281 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446425 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446506 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446547 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446695 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446764 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446820 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446860 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446904 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.446963 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447008 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8w5h\" (UniqueName: \"kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447048 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447107 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447184 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies\") pod \"81afd38e-4b98-450d-89b1-06efe9f059e8\" (UID: \"81afd38e-4b98-450d-89b1-06efe9f059e8\") "
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447507 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447533 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447709 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.447723 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.448242 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.448656 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.453358 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.453717 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.453798 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.456085 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h" (OuterVolumeSpecName: "kube-api-access-j8w5h") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "kube-api-access-j8w5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.456887 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.461810 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.462163 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.462455 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.462706 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.464070 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "81afd38e-4b98-450d-89b1-06efe9f059e8" (UID: "81afd38e-4b98-450d-89b1-06efe9f059e8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549778 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549839 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549853 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549867 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549879 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549893 4619 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549910 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549921 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8w5h\" (UniqueName: \"kubernetes.io/projected/81afd38e-4b98-450d-89b1-06efe9f059e8-kube-api-access-j8w5h\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549931 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549942 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549954 4619 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81afd38e-4b98-450d-89b1-06efe9f059e8-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.549967 4619 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/81afd38e-4b98-450d-89b1-06efe9f059e8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
on node \"crc\" DevicePath \"\"" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.618483 4619 generic.go:334] "Generic (PLEG): container finished" podID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerID="818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853" exitCode=0 Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.618564 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.618598 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" event={"ID":"81afd38e-4b98-450d-89b1-06efe9f059e8","Type":"ContainerDied","Data":"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853"} Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.619111 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" event={"ID":"81afd38e-4b98-450d-89b1-06efe9f059e8","Type":"ContainerDied","Data":"c31b8e116cad99a33440d2d96143a07b4a137850bebd5b3c70d076ef264f0a13"} Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.619159 4619 scope.go:117] "RemoveContainer" containerID="818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.620255 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.620474 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.636209 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.636979 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.644526 4619 scope.go:117] "RemoveContainer" containerID="818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853" Jan 26 10:59:23 crc kubenswrapper[4619]: E0126 10:59:23.645127 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853\": container with ID starting with 818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853 not found: ID does not exist" 
containerID="818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853" Jan 26 10:59:23 crc kubenswrapper[4619]: I0126 10:59:23.645275 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853"} err="failed to get container status \"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853\": rpc error: code = NotFound desc = could not find container \"818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853\": container with ID starting with 818bfdeac739d610455806698c37644e1aca011fb7fac8241a9484c51ae20853 not found: ID does not exist" Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.632197 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.632262 4619 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed" exitCode=1 Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.632303 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed"} Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.632889 4619 scope.go:117] "RemoveContainer" containerID="74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed" Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.634836 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.635093 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:24 crc kubenswrapper[4619]: I0126 10:59:24.635412 4619 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.594761 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.642506 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.642595 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1747897c0a0e47d64b70ffe3a4490f079d5cb8efbffdadf44690f1e7e384f24"} Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.661479 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.662993 4619 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:25 crc kubenswrapper[4619]: I0126 10:59:25.664586 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.260723 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.261522 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.261768 4619 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.262145 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.274225 4619 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.274266 4619 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:26 crc kubenswrapper[4619]: E0126 10:59:26.274749 4619 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:26 crc 
kubenswrapper[4619]: I0126 10:59:26.275404 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:26 crc kubenswrapper[4619]: W0126 10:59:26.300544 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a0e8e5a6a3dc456f40cd7e4ecb293337806781755082f15e6f6af65adcc4f4c0 WatchSource:0}: Error finding container a0e8e5a6a3dc456f40cd7e4ecb293337806781755082f15e6f6af65adcc4f4c0: Status 404 returned error can't find the container with id a0e8e5a6a3dc456f40cd7e4ecb293337806781755082f15e6f6af65adcc4f4c0 Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.661177 4619 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="44a5596ffaef0101239aa999ce0461631a13e555d1415fadeaf443752ae9d79b" exitCode=0 Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.661330 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"44a5596ffaef0101239aa999ce0461631a13e555d1415fadeaf443752ae9d79b"} Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.661916 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0e8e5a6a3dc456f40cd7e4ecb293337806781755082f15e6f6af65adcc4f4c0"} Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.662499 4619 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.662541 4619 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:26 crc kubenswrapper[4619]: E0126 10:59:26.662968 4619 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.663377 4619 status_manager.go:851] "Failed to get status for pod" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.663785 4619 status_manager.go:851] "Failed to get status for pod" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" pod="openshift-authentication/oauth-openshift-558db77b4-6jj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6jj4w\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 10:59:26 crc kubenswrapper[4619]: I0126 10:59:26.664015 4619 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection 
refused" Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.528770 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.529208 4619 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.529291 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.671088 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"040180497be2ff5fef2bf4f7034d50231d06cf5d1b6c350107daf8b6924e9f94"} Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.671167 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c51a2b8bc164890a51afad857d57c3ce57ccc1cec488473c5f9e63ab64e37730"} Jan 26 10:59:27 crc kubenswrapper[4619]: I0126 10:59:27.671178 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"22dafe8de0fe88e618475685a3aee133ead3f935344862dd1b7d6e6993f005d4"} Jan 26 10:59:28 crc kubenswrapper[4619]: I0126 10:59:28.681343 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c0ef0321d96050759fdba6801db0968e94fdd0b534d2731e904e980d6b31ccd2"} Jan 26 10:59:28 crc kubenswrapper[4619]: I0126 10:59:28.681681 4619 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:28 crc kubenswrapper[4619]: I0126 10:59:28.681976 4619 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:28 crc kubenswrapper[4619]: I0126 10:59:28.681918 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d132819a8955d88c435dadde19bcd7d9a8aa852d768adf6edeb16d2a08672895"} Jan 26 10:59:28 crc kubenswrapper[4619]: I0126 10:59:28.682261 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:31 crc kubenswrapper[4619]: I0126 10:59:31.275571 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:31 crc kubenswrapper[4619]: I0126 10:59:31.275951 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:31 
crc kubenswrapper[4619]: I0126 10:59:31.288309 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:33 crc kubenswrapper[4619]: I0126 10:59:33.696592 4619 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:33 crc kubenswrapper[4619]: I0126 10:59:33.817182 4619 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b3158007-5af7-4bfe-b786-c3821ea6ca59" Jan 26 10:59:34 crc kubenswrapper[4619]: I0126 10:59:34.719809 4619 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:34 crc kubenswrapper[4619]: I0126 10:59:34.722226 4619 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:34 crc kubenswrapper[4619]: I0126 10:59:34.723498 4619 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b3158007-5af7-4bfe-b786-c3821ea6ca59" Jan 26 10:59:34 crc kubenswrapper[4619]: I0126 10:59:34.724272 4619 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://22dafe8de0fe88e618475685a3aee133ead3f935344862dd1b7d6e6993f005d4" Jan 26 10:59:34 crc kubenswrapper[4619]: I0126 10:59:34.724288 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:35 crc kubenswrapper[4619]: I0126 10:59:35.594665 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:59:35 crc kubenswrapper[4619]: I0126 10:59:35.724311 4619 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:35 crc kubenswrapper[4619]: I0126 10:59:35.724341 4619 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0f41b65e-88fb-45c3-a959-984e44525720" Jan 26 10:59:35 crc kubenswrapper[4619]: I0126 10:59:35.729876 4619 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b3158007-5af7-4bfe-b786-c3821ea6ca59" Jan 26 10:59:37 crc kubenswrapper[4619]: I0126 10:59:37.528996 4619 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 10:59:37 crc kubenswrapper[4619]: I0126 10:59:37.529059 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 10:59:43 crc kubenswrapper[4619]: I0126 
10:59:43.244098 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 10:59:43 crc kubenswrapper[4619]: I0126 10:59:43.614233 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 10:59:43 crc kubenswrapper[4619]: I0126 10:59:43.864043 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.057209 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.154986 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.355401 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.502590 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.631078 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.758456 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.814037 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 10:59:44 crc kubenswrapper[4619]: I0126 10:59:44.850961 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.107728 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.170150 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.243655 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.291894 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.368967 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.382509 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.394758 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.412654 4619 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.600514 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.615963 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.642724 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.756015 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.765606 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 10:59:45 crc kubenswrapper[4619]: I0126 10:59:45.907181 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.101278 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.154502 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.201968 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.271457 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.546784 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.577012 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.724293 4619 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.729467 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-6jj4w"] Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.729777 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.735116 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.773286 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.773264844 podStartE2EDuration="13.773264844s" podCreationTimestamp="2026-01-26 10:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:59:46.75422447 +0000 UTC m=+285.788265206" 
watchObservedRunningTime="2026-01-26 10:59:46.773264844 +0000 UTC m=+285.807305560" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.775107 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 10:59:46 crc kubenswrapper[4619]: I0126 10:59:46.900840 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.039209 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.050238 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.057003 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.268987 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" path="/var/lib/kubelet/pods/81afd38e-4b98-450d-89b1-06efe9f059e8/volumes" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.459685 4619 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.466017 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.481414 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.516515 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.528885 4619 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.528958 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.529045 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.529993 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e1747897c0a0e47d64b70ffe3a4490f079d5cb8efbffdadf44690f1e7e384f24"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.530254 4619 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e1747897c0a0e47d64b70ffe3a4490f079d5cb8efbffdadf44690f1e7e384f24" gracePeriod=30 Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.565066 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.574473 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.623049 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.676331 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.833454 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.834546 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.835439 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.864724 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.930604 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 10:59:47 crc kubenswrapper[4619]: I0126 10:59:47.950129 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.116122 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.264571 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.284108 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.365847 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.407438 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.483022 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.488237 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.501312 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 10:59:48 crc 
kubenswrapper[4619]: I0126 10:59:48.607524 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.624812 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.625649 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.629300 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.645240 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.731765 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.740410 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.746028 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.750847 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.751736 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.866138 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.905250 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 10:59:48 crc kubenswrapper[4619]: I0126 10:59:48.908723 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.102875 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.126027 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.139547 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.182847 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.248395 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.268053 4619 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.299381 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.305736 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.381005 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.418744 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.471398 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.480459 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.506679 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.527843 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.645216 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.651451 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.759051 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.791038 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.792514 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.835029 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.861574 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.946170 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.960026 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.982221 4619 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 10:59:49 crc kubenswrapper[4619]: I0126 10:59:49.993668 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.088139 4619 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.138538 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.156630 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.283301 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.298640 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.410359 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.543423 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.548086 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.564946 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.660285 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.799581 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.817104 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.903549 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.926862 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.954704 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 10:59:50 crc kubenswrapper[4619]: I0126 10:59:50.955383 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.023025 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.042866 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.129924 4619 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.183488 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.232337 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.375962 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.399541 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.422962 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.428438 4619 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.457200 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.490286 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.586061 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.648135 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.675351 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.696439 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.904922 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66456c6bb-mrdjf"] Jan 26 10:59:51 crc kubenswrapper[4619]: E0126 10:59:51.905270 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerName="oauth-openshift" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.905322 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerName="oauth-openshift" Jan 26 10:59:51 crc kubenswrapper[4619]: E0126 10:59:51.905335 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" containerName="installer" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.905344 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" containerName="installer" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.905552 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe1934c-ce73-47cb-8246-ce6f47742100" containerName="installer" Jan 26 10:59:51 crc 
kubenswrapper[4619]: I0126 10:59:51.905575 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="81afd38e-4b98-450d-89b1-06efe9f059e8" containerName="oauth-openshift" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.906135 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911300 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911609 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911728 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911753 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911872 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.911891 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.912024 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.912134 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.912225 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.912490 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.913303 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.914890 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.915882 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66456c6bb-mrdjf"] Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921170 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69drc\" (UniqueName: \"kubernetes.io/projected/d742b507-8363-4c32-82b8-4886c3637ccb-kube-api-access-69drc\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921230 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921266 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-session\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921288 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-login\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921314 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921353 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921377 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921408 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d742b507-8363-4c32-82b8-4886c3637ccb-audit-dir\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921437 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-error\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921460 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921482 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921501 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921525 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-audit-policies\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.921555 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.922785 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.923637 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 10:59:51 crc kubenswrapper[4619]: I0126 10:59:51.925996 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.022462 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69drc\" (UniqueName: \"kubernetes.io/projected/d742b507-8363-4c32-82b8-4886c3637ccb-kube-api-access-69drc\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.022622 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: 
\"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.022658 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-session\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.022680 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-login\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.022708 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023076 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023738 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023804 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d742b507-8363-4c32-82b8-4886c3637ccb-audit-dir\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023881 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-error\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023916 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " 
pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023943 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023970 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.023997 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-audit-policies\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.024035 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.024189 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.024903 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.024973 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d742b507-8363-4c32-82b8-4886c3637ccb-audit-dir\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.025365 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-audit-policies\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.025436 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.028217 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-error\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.028555 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.028601 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.029078 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-user-template-login\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.031196 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-session\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.037934 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.041315 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.043156 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d742b507-8363-4c32-82b8-4886c3637ccb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.054370 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69drc\" (UniqueName: \"kubernetes.io/projected/d742b507-8363-4c32-82b8-4886c3637ccb-kube-api-access-69drc\") pod \"oauth-openshift-66456c6bb-mrdjf\" (UID: \"d742b507-8363-4c32-82b8-4886c3637ccb\") " pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.228654 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.312077 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.379673 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.505322 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.548234 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.573319 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.581696 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.619389 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.623902 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.670986 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.671841 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.724570 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.732118 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.825268 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.839660 4619 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.893207 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.901045 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.981693 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 10:59:52 crc kubenswrapper[4619]: I0126 10:59:52.997550 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.008193 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.026438 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.084187 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.328168 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.460179 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.541924 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.554882 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.627906 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.750425 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 10:59:53 crc kubenswrapper[4619]: I0126 10:59:53.909560 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.042498 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.114472 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.155287 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.225185 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 
10:59:54.316028 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.336171 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.386018 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.426999 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.485010 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.520332 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66456c6bb-mrdjf"] Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.629977 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.635817 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.669342 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.684542 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.687687 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.697060 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.833573 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.836022 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" event={"ID":"d742b507-8363-4c32-82b8-4886c3637ccb","Type":"ContainerStarted","Data":"08d43d032533bfc23b70812466cdddb3a432956ced5287591476ed8c1116aecf"} Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.848931 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.858654 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.879552 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.983131 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.991911 
4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 10:59:54 crc kubenswrapper[4619]: I0126 10:59:54.992461 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.024697 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.201990 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.216231 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.381324 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.437262 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.446698 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.511808 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.521850 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.588845 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.609304 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.609563 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.612444 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.671036 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.708297 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.760690 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.783392 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.800269 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 10:59:55 crc 
kubenswrapper[4619]: I0126 10:59:55.859654 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" event={"ID":"d742b507-8363-4c32-82b8-4886c3637ccb","Type":"ContainerStarted","Data":"145efec9d67975b14f7b6d329c54369e35e311b30bd3d40ec3a0425f20858ae2"} Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.860154 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.867160 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.874918 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.895678 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66456c6bb-mrdjf" podStartSLOduration=58.895590761 podStartE2EDuration="58.895590761s" podCreationTimestamp="2026-01-26 10:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 10:59:55.892170136 +0000 UTC m=+294.926210872" watchObservedRunningTime="2026-01-26 10:59:55.895590761 +0000 UTC m=+294.929631477" Jan 26 10:59:55 crc kubenswrapper[4619]: I0126 10:59:55.970008 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.073947 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.295522 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.296108 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.396003 4619 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.396415 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5aa41818a55e10bc883950638bbf126ebf2925aa4b8fbe685ed139998df17277" gracePeriod=5 Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.492953 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.625324 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.629974 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.635780 4619 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.655290 4619 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.665006 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.770068 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.844651 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.851011 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 10:59:56 crc kubenswrapper[4619]: I0126 10:59:56.946548 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.064967 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.212490 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.463914 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.467354 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.471000 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.507008 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.655910 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.725379 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.758540 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 10:59:57 crc kubenswrapper[4619]: I0126 10:59:57.905164 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.044465 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.092760 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.225544 4619 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.314334 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.329014 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.497335 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.627214 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.713924 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.935385 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 10:59:58 crc kubenswrapper[4619]: I0126 10:59:58.954710 4619 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 10:59:59 crc kubenswrapper[4619]: I0126 10:59:59.064602 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 10:59:59 crc kubenswrapper[4619]: I0126 10:59:59.078565 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 10:59:59 crc kubenswrapper[4619]: I0126 10:59:59.920047 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 10:59:59 crc kubenswrapper[4619]: I0126 10:59:59.930304 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 11:00:01 crc kubenswrapper[4619]: I0126 11:00:01.130870 4619 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 11:00:01 crc kubenswrapper[4619]: I0126 11:00:01.895221 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 11:00:01 crc kubenswrapper[4619]: I0126 11:00:01.895873 4619 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5aa41818a55e10bc883950638bbf126ebf2925aa4b8fbe685ed139998df17277" exitCode=137 Jan 26 11:00:01 crc kubenswrapper[4619]: I0126 11:00:01.964571 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 11:00:01 crc kubenswrapper[4619]: I0126 11:00:01.964740 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075580 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075680 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075714 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075737 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075770 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075819 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.075918 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.076750 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.076849 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.076913 4619 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.076942 4619 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.076960 4619 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.089605 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.178634 4619 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.178685 4619 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.904444 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.904677 4619 scope.go:117] "RemoveContainer" containerID="5aa41818a55e10bc883950638bbf126ebf2925aa4b8fbe685ed139998df17277" Jan 26 11:00:02 crc kubenswrapper[4619]: I0126 11:00:02.904729 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:00:03 crc kubenswrapper[4619]: I0126 11:00:03.269433 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.105219 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2"] Jan 26 11:00:09 crc kubenswrapper[4619]: E0126 11:00:09.105434 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.105446 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.105550 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.105939 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.108598 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.108819 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.122540 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2"] Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.264875 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.265183 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.265312 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595qx\" (UniqueName: \"kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.366406 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.366480 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.366496 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595qx\" (UniqueName: \"kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.367971 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume\") pod 
\"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.373547 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.385482 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595qx\" (UniqueName: \"kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx\") pod \"collect-profiles-29490420-7xdt2\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.426111 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.736446 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2"] Jan 26 11:00:09 crc kubenswrapper[4619]: W0126 11:00:09.746426 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9985263d_2046_4641_b6cb_235bc8403d32.slice/crio-98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd WatchSource:0}: Error finding container 98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd: Status 404 returned error can't find the container with id 98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.940236 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" event={"ID":"9985263d-2046-4641-b6cb-235bc8403d32","Type":"ContainerStarted","Data":"b6e66f2075e14cf18463c0dd36480aef651f2ea230914606076060d9762e255d"} Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.940551 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" event={"ID":"9985263d-2046-4641-b6cb-235bc8403d32","Type":"ContainerStarted","Data":"98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd"} Jan 26 11:00:09 crc kubenswrapper[4619]: I0126 11:00:09.961372 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" podStartSLOduration=0.961350011 podStartE2EDuration="961.350011ms" podCreationTimestamp="2026-01-26 11:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:00:09.957514325 +0000 UTC m=+308.991555051" watchObservedRunningTime="2026-01-26 11:00:09.961350011 +0000 UTC m=+308.995390717" Jan 26 11:00:10 crc kubenswrapper[4619]: I0126 11:00:10.945772 4619 generic.go:334] "Generic (PLEG): container finished" podID="9985263d-2046-4641-b6cb-235bc8403d32" containerID="b6e66f2075e14cf18463c0dd36480aef651f2ea230914606076060d9762e255d" exitCode=0 Jan 26 11:00:10 crc kubenswrapper[4619]: I0126 11:00:10.945874 
4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" event={"ID":"9985263d-2046-4641-b6cb-235bc8403d32","Type":"ContainerDied","Data":"b6e66f2075e14cf18463c0dd36480aef651f2ea230914606076060d9762e255d"} Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.154278 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.309153 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume\") pod \"9985263d-2046-4641-b6cb-235bc8403d32\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.309221 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume\") pod \"9985263d-2046-4641-b6cb-235bc8403d32\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.309254 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-595qx\" (UniqueName: \"kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx\") pod \"9985263d-2046-4641-b6cb-235bc8403d32\" (UID: \"9985263d-2046-4641-b6cb-235bc8403d32\") " Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.310954 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume" (OuterVolumeSpecName: "config-volume") pod "9985263d-2046-4641-b6cb-235bc8403d32" (UID: "9985263d-2046-4641-b6cb-235bc8403d32"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.314952 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9985263d-2046-4641-b6cb-235bc8403d32" (UID: "9985263d-2046-4641-b6cb-235bc8403d32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.317808 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx" (OuterVolumeSpecName: "kube-api-access-595qx") pod "9985263d-2046-4641-b6cb-235bc8403d32" (UID: "9985263d-2046-4641-b6cb-235bc8403d32"). InnerVolumeSpecName "kube-api-access-595qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.410371 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9985263d-2046-4641-b6cb-235bc8403d32-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.410413 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9985263d-2046-4641-b6cb-235bc8403d32-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.410423 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-595qx\" (UniqueName: \"kubernetes.io/projected/9985263d-2046-4641-b6cb-235bc8403d32-kube-api-access-595qx\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.960142 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" event={"ID":"9985263d-2046-4641-b6cb-235bc8403d32","Type":"ContainerDied","Data":"98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd"} Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.960217 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98db701b47b66a4b0e40cfb260412c44f1640c10e79c437e43c07bf78e2e58cd" Jan 26 11:00:12 crc kubenswrapper[4619]: I0126 11:00:12.960166 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2" Jan 26 11:00:17 crc kubenswrapper[4619]: I0126 11:00:17.987527 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 11:00:17 crc kubenswrapper[4619]: I0126 11:00:17.998762 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 11:00:17 crc kubenswrapper[4619]: I0126 11:00:17.998916 4619 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e1747897c0a0e47d64b70ffe3a4490f079d5cb8efbffdadf44690f1e7e384f24" exitCode=137 Jan 26 11:00:17 crc kubenswrapper[4619]: I0126 11:00:17.999033 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e1747897c0a0e47d64b70ffe3a4490f079d5cb8efbffdadf44690f1e7e384f24"} Jan 26 11:00:17 crc kubenswrapper[4619]: I0126 11:00:17.999132 4619 scope.go:117] "RemoveContainer" containerID="74c64349213772f7e31f4e2db377e18667841bdd8958a0a3f514e743497d6eed" Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.565891 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"] Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.566370 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsqjj" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="registry-server" containerID="cri-o://6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94" gracePeriod=30 Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.574352 4619 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlrvs"] Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.574767 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlrvs" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="registry-server" containerID="cri-o://d88bfce001571cbcc69a65143cbf6b4836cbee73bcdfeb94194e87948b7acfbf" gracePeriod=30 Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.582433 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"] Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.582693 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator" containerID="cri-o://ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e" gracePeriod=30 Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.594969 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"] Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.595264 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tvhzw" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="registry-server" containerID="cri-o://7d80ba93fa7dfba3ed6351b2d1b60d5484ea2c7bab0a21b8e508cc8184092f47" gracePeriod=30 Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.606211 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"] Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.606530 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mntj" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="registry-server" containerID="cri-o://968d33e5def046d9d75f7d642b861e9fb5e10fa0b069db29f9cca1fdb699290e" gracePeriod=30 Jan 26 11:00:18 crc kubenswrapper[4619]: I0126 11:00:18.927701 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.003688 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.016296 4619 generic.go:334] "Generic (PLEG): container finished" podID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerID="d88bfce001571cbcc69a65143cbf6b4836cbee73bcdfeb94194e87948b7acfbf" exitCode=0 Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.016395 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerDied","Data":"d88bfce001571cbcc69a65143cbf6b4836cbee73bcdfeb94194e87948b7acfbf"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.019937 4619 generic.go:334] "Generic (PLEG): container finished" podID="7a310950-4656-4955-b453-e846f29f47d8" containerID="6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94" exitCode=0 Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.020008 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerDied","Data":"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.020040 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsqjj" event={"ID":"7a310950-4656-4955-b453-e846f29f47d8","Type":"ContainerDied","Data":"e61163909f1d35dd1826224e57970a32f58185d9928a2c2caf74791612ab42ae"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.020064 4619 scope.go:117] "RemoveContainer" containerID="6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.020179 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsqjj" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.030843 4619 generic.go:334] "Generic (PLEG): container finished" podID="3d6d6055-6441-4f47-8107-8886901691cc" containerID="7d80ba93fa7dfba3ed6351b2d1b60d5484ea2c7bab0a21b8e508cc8184092f47" exitCode=0 Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.030905 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerDied","Data":"7d80ba93fa7dfba3ed6351b2d1b60d5484ea2c7bab0a21b8e508cc8184092f47"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.036645 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.037578 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff6a06f45f84c3fab78646a41fe8fa405a10818d36e33db494142de26bb7703c"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.048346 4619 scope.go:117] "RemoveContainer" containerID="ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.048578 4619 generic.go:334] "Generic (PLEG): container finished" podID="87b07f24-92c0-4190-a140-6029e82f826d" containerID="ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e" exitCode=0 Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.048683 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" event={"ID":"87b07f24-92c0-4190-a140-6029e82f826d","Type":"ContainerDied","Data":"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.048713 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7" event={"ID":"87b07f24-92c0-4190-a140-6029e82f826d","Type":"ContainerDied","Data":"d4c3767e98af82085b2d52fb095b9e17826f51456a95bae8d66fcd80dd0007c6"} Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.048771 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn8z7"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.067650 4619 generic.go:334] "Generic (PLEG): container finished" podID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerID="968d33e5def046d9d75f7d642b861e9fb5e10fa0b069db29f9cca1fdb699290e" exitCode=0
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.067721 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerDied","Data":"968d33e5def046d9d75f7d642b861e9fb5e10fa0b069db29f9cca1fdb699290e"}
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.084383 4619 scope.go:117] "RemoveContainer" containerID="935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094042 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbk9q\" (UniqueName: \"kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q\") pod \"7a310950-4656-4955-b453-e846f29f47d8\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094094 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics\") pod \"87b07f24-92c0-4190-a140-6029e82f826d\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094145 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities\") pod \"7a310950-4656-4955-b453-e846f29f47d8\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094172 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content\") pod \"7a310950-4656-4955-b453-e846f29f47d8\" (UID: \"7a310950-4656-4955-b453-e846f29f47d8\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094206 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qtdv\" (UniqueName: \"kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv\") pod \"87b07f24-92c0-4190-a140-6029e82f826d\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.094226 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca\") pod \"87b07f24-92c0-4190-a140-6029e82f826d\" (UID: \"87b07f24-92c0-4190-a140-6029e82f826d\") "
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.095225 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "87b07f24-92c0-4190-a140-6029e82f826d" (UID: "87b07f24-92c0-4190-a140-6029e82f826d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.095585 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities" (OuterVolumeSpecName: "utilities") pod "7a310950-4656-4955-b453-e846f29f47d8" (UID: "7a310950-4656-4955-b453-e846f29f47d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.110924 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q" (OuterVolumeSpecName: "kube-api-access-sbk9q") pod "7a310950-4656-4955-b453-e846f29f47d8" (UID: "7a310950-4656-4955-b453-e846f29f47d8"). InnerVolumeSpecName "kube-api-access-sbk9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.111049 4619 scope.go:117] "RemoveContainer" containerID="6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.111344 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv" (OuterVolumeSpecName: "kube-api-access-6qtdv") pod "87b07f24-92c0-4190-a140-6029e82f826d" (UID: "87b07f24-92c0-4190-a140-6029e82f826d"). InnerVolumeSpecName "kube-api-access-6qtdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:00:19 crc kubenswrapper[4619]: E0126 11:00:19.111794 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94\": container with ID starting with 6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94 not found: ID does not exist" containerID="6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.111832 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94"} err="failed to get container status \"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94\": rpc error: code = NotFound desc = could not find container \"6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94\": container with ID starting with 6366c742031d44d86b86104126962062256e017a1312a3c1c6108b4f94b19f94 not found: ID does not exist"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.111857 4619 scope.go:117] "RemoveContainer" containerID="ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66"
Jan 26 11:00:19 crc kubenswrapper[4619]: E0126 11:00:19.112449 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66\": container with ID starting with ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66 not found: ID does not exist" containerID="ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.112468 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66"} err="failed to get container status \"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66\": rpc error: code = NotFound desc = could not find container \"ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66\": container with ID starting with ca95cf416677ce9902bd2ff6a9511be001014a64daa012fef265df83ef65cc66 not found: ID does not exist"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.112482 4619 scope.go:117] "RemoveContainer" containerID="935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.112545 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "87b07f24-92c0-4190-a140-6029e82f826d" (UID: "87b07f24-92c0-4190-a140-6029e82f826d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:00:19 crc kubenswrapper[4619]: E0126 11:00:19.112851 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60\": container with ID starting with 935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60 not found: ID does not exist" containerID="935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.112911 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60"} err="failed to get container status \"935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60\": rpc error: code = NotFound desc = could not find container \"935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60\": container with ID starting with 935ffbe8d6b2832ba39d846d1983290e965f0b9cd5bd7843351462bd82362e60 not found: ID does not exist"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.112946 4619 scope.go:117] "RemoveContainer" containerID="ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.114337 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlrvs"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.124765 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvhzw"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.134814 4619 scope.go:117] "RemoveContainer" containerID="ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"
Jan 26 11:00:19 crc kubenswrapper[4619]: E0126 11:00:19.135826 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e\": container with ID starting with ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e not found: ID does not exist" containerID="ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.135877 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e"} err="failed to get container status \"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e\": rpc error: code = NotFound desc = could not find container \"ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e\": container with ID starting with ce4c36d437cf45801a08ec73cbf0308c61fc5c6863a67c44e647de5c0acd2a4e not found: ID does not exist"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.140431 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mntj"
Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.145506 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a310950-4656-4955-b453-e846f29f47d8" (UID: "7a310950-4656-4955-b453-e846f29f47d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
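The paired "RemoveContainer" / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above record a benign race: by the time the kubelet re-queries the container, CRI-O has already removed it, and the runtime answers with gRPC code NotFound. A minimal Go sketch of that tolerant-delete pattern follows (illustrative names only, not kubelet source):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer models an idempotent delete against a CRI runtime:
// a NotFound answer means the container is already gone, which is the
// desired end state, so it is not treated as a failure.
func removeContainer(id string, deleteFn func(string) error) error {
	if err := deleteFn(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already removed by the runtime, as in the log above
		}
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer("6366c742...", gone)) // prints <nil>
}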
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196697 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbk9q\" (UniqueName: \"kubernetes.io/projected/7a310950-4656-4955-b453-e846f29f47d8-kube-api-access-sbk9q\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196745 4619 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196757 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196768 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a310950-4656-4955-b453-e846f29f47d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196778 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qtdv\" (UniqueName: \"kubernetes.io/projected/87b07f24-92c0-4190-a140-6029e82f826d-kube-api-access-6qtdv\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.196792 4619 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b07f24-92c0-4190-a140-6029e82f826d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297579 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities\") pod \"722ff7bc-6563-4562-b96f-430b1b2fedd1\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297639 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4ql9\" (UniqueName: \"kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9\") pod \"74eaac61-fd26-4596-ab74-d1282c0baf2b\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297659 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content\") pod \"74eaac61-fd26-4596-ab74-d1282c0baf2b\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297688 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content\") pod \"722ff7bc-6563-4562-b96f-430b1b2fedd1\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297747 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content\") pod \"3d6d6055-6441-4f47-8107-8886901691cc\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 
11:00:19.297766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8wgx\" (UniqueName: \"kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx\") pod \"722ff7bc-6563-4562-b96f-430b1b2fedd1\" (UID: \"722ff7bc-6563-4562-b96f-430b1b2fedd1\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297781 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities\") pod \"3d6d6055-6441-4f47-8107-8886901691cc\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297809 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-467ds\" (UniqueName: \"kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds\") pod \"3d6d6055-6441-4f47-8107-8886901691cc\" (UID: \"3d6d6055-6441-4f47-8107-8886901691cc\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.297826 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities\") pod \"74eaac61-fd26-4596-ab74-d1282c0baf2b\" (UID: \"74eaac61-fd26-4596-ab74-d1282c0baf2b\") " Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.299444 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities" (OuterVolumeSpecName: "utilities") pod "74eaac61-fd26-4596-ab74-d1282c0baf2b" (UID: "74eaac61-fd26-4596-ab74-d1282c0baf2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.302437 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx" (OuterVolumeSpecName: "kube-api-access-x8wgx") pod "722ff7bc-6563-4562-b96f-430b1b2fedd1" (UID: "722ff7bc-6563-4562-b96f-430b1b2fedd1"). InnerVolumeSpecName "kube-api-access-x8wgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.302497 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9" (OuterVolumeSpecName: "kube-api-access-f4ql9") pod "74eaac61-fd26-4596-ab74-d1282c0baf2b" (UID: "74eaac61-fd26-4596-ab74-d1282c0baf2b"). InnerVolumeSpecName "kube-api-access-f4ql9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.303349 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities" (OuterVolumeSpecName: "utilities") pod "3d6d6055-6441-4f47-8107-8886901691cc" (UID: "3d6d6055-6441-4f47-8107-8886901691cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.304348 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities" (OuterVolumeSpecName: "utilities") pod "722ff7bc-6563-4562-b96f-430b1b2fedd1" (UID: "722ff7bc-6563-4562-b96f-430b1b2fedd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.316129 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds" (OuterVolumeSpecName: "kube-api-access-467ds") pod "3d6d6055-6441-4f47-8107-8886901691cc" (UID: "3d6d6055-6441-4f47-8107-8886901691cc"). InnerVolumeSpecName "kube-api-access-467ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.333318 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d6d6055-6441-4f47-8107-8886901691cc" (UID: "3d6d6055-6441-4f47-8107-8886901691cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.336719 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"] Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.339642 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsqjj"] Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.363499 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "722ff7bc-6563-4562-b96f-430b1b2fedd1" (UID: "722ff7bc-6563-4562-b96f-430b1b2fedd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.368538 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"] Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.373199 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn8z7"] Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.399725 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.399964 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8wgx\" (UniqueName: \"kubernetes.io/projected/722ff7bc-6563-4562-b96f-430b1b2fedd1-kube-api-access-x8wgx\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400090 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d6055-6441-4f47-8107-8886901691cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400154 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-467ds\" (UniqueName: \"kubernetes.io/projected/3d6d6055-6441-4f47-8107-8886901691cc-kube-api-access-467ds\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400219 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400285 4619 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400342 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4ql9\" (UniqueName: \"kubernetes.io/projected/74eaac61-fd26-4596-ab74-d1282c0baf2b-kube-api-access-f4ql9\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.400406 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722ff7bc-6563-4562-b96f-430b1b2fedd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.425903 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74eaac61-fd26-4596-ab74-d1282c0baf2b" (UID: "74eaac61-fd26-4596-ab74-d1282c0baf2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:00:19 crc kubenswrapper[4619]: I0126 11:00:19.501062 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74eaac61-fd26-4596-ab74-d1282c0baf2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.079917 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrvs" event={"ID":"722ff7bc-6563-4562-b96f-430b1b2fedd1","Type":"ContainerDied","Data":"68b37fcb6502fa077dcbdeac280c3b9642ddf02e4dc95bcc4983b5101511a2a4"} Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.079997 4619 scope.go:117] "RemoveContainer" containerID="d88bfce001571cbcc69a65143cbf6b4836cbee73bcdfeb94194e87948b7acfbf" Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.080766 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlrvs" Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.084648 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvhzw" Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.084639 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvhzw" event={"ID":"3d6d6055-6441-4f47-8107-8886901691cc","Type":"ContainerDied","Data":"0e197741cbab0b371c21d5646975244bb96d819a27fb0311e64ecead88b98073"} Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.089197 4619 util.go:48] "No ready sandbox for pod can be found. 
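The long run of "UnmountVolume started" / "UnmountVolume.TearDown succeeded" / "Volume detached" triples above is the kubelet's volume reconciler working through the deleted marketplace pods: it diffs the desired state (volumes that should be mounted) against the actual state (volumes still mounted) and tears down the difference. A compact Go sketch of that pattern, with types invented for illustration:

package main

import "fmt"

// mount identifies one mounted volume of one pod; comparable, so it
// can serve as a map key.
type mount struct{ podUID, volume string }

// reconcile unmounts every actually-mounted volume that is no longer
// in the desired state, mirroring the log's unmount/detach pairs.
func reconcile(desired map[mount]bool, actual []mount, unmount func(mount) error) {
	for _, m := range actual {
		if desired[m] {
			continue // still wanted; leave it mounted
		}
		if err := unmount(m); err != nil {
			fmt.Printf("UnmountVolume failed for %v: %v\n", m, err)
			continue
		}
		fmt.Printf("Volume detached for volume %q pod %q\n", m.volume, m.podUID)
	}
}

func main() {
	actual := []mount{
		{"7a310950-4656-4955-b453-e846f29f47d8", "utilities"},
		{"82722d3b-1952-40e3-87db-a9f4d4c9b83a", "catalog-content"},
	}
	desired := map[mount]bool{actual[1]: true} // second pod still running
	reconcile(desired, actual, func(mount) error { return nil })
}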
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.089188 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mntj" event={"ID":"74eaac61-fd26-4596-ab74-d1282c0baf2b","Type":"ContainerDied","Data":"294f4a96ede982e10f8da8776e90ef9d5fb7d03e4776d5c267fc0104969287f7"}
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.118814 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlrvs"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.119075 4619 scope.go:117] "RemoveContainer" containerID="7383259c69cf74f769fcc835d0d4c6d450611f9d01309c4ef2f1101e410ba6aa"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.126847 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlrvs"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.140678 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.142124 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7mntj"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.152848 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.156790 4619 scope.go:117] "RemoveContainer" containerID="98f7138793e6eabd97db1f0d370a7730d72666200d77bec5af46a885759596c7"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.157717 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvhzw"]
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.170026 4619 scope.go:117] "RemoveContainer" containerID="7d80ba93fa7dfba3ed6351b2d1b60d5484ea2c7bab0a21b8e508cc8184092f47"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.185537 4619 scope.go:117] "RemoveContainer" containerID="59f7945f3dab4bd31c278b496328a70387f7e8edaea69a7a270820e9fe7a352d"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.198260 4619 scope.go:117] "RemoveContainer" containerID="ddc23e268cac64005f25ea547210357012aa148d4eef396bdd0c1a796982b84e"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.213015 4619 scope.go:117] "RemoveContainer" containerID="968d33e5def046d9d75f7d642b861e9fb5e10fa0b069db29f9cca1fdb699290e"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.243051 4619 scope.go:117] "RemoveContainer" containerID="49c4c4b9437444610dd717888f8f37fcbc12fab3c5f8d912ca2832c6df6f31f5"
Jan 26 11:00:20 crc kubenswrapper[4619]: I0126 11:00:20.268258 4619 scope.go:117] "RemoveContainer" containerID="2034dde0b37929f9d4a247913e23c94c4ca038f68fcc455e5c5dbb831cc7fb68"
Jan 26 11:00:21 crc kubenswrapper[4619]: I0126 11:00:21.271204 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6d6055-6441-4f47-8107-8886901691cc" path="/var/lib/kubelet/pods/3d6d6055-6441-4f47-8107-8886901691cc/volumes"
Jan 26 11:00:21 crc kubenswrapper[4619]: I0126 11:00:21.272085 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" path="/var/lib/kubelet/pods/722ff7bc-6563-4562-b96f-430b1b2fedd1/volumes"
Jan 26 11:00:21 crc kubenswrapper[4619]: I0126 11:00:21.272924 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" path="/var/lib/kubelet/pods/74eaac61-fd26-4596-ab74-d1282c0baf2b/volumes"
Jan 26 11:00:21 crc kubenswrapper[4619]: I0126 11:00:21.274199 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a310950-4656-4955-b453-e846f29f47d8" path="/var/lib/kubelet/pods/7a310950-4656-4955-b453-e846f29f47d8/volumes"
Jan 26 11:00:21 crc kubenswrapper[4619]: I0126 11:00:21.274929 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b07f24-92c0-4190-a140-6029e82f826d" path="/var/lib/kubelet/pods/87b07f24-92c0-4190-a140-6029e82f826d/volumes"
Jan 26 11:00:23 crc kubenswrapper[4619]: I0126 11:00:23.859435 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 11:00:25 crc kubenswrapper[4619]: I0126 11:00:25.247303 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 11:00:25 crc kubenswrapper[4619]: I0126 11:00:25.594183 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:00:27 crc kubenswrapper[4619]: I0126 11:00:27.528321 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:00:27 crc kubenswrapper[4619]: I0126 11:00:27.531599 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:00:28 crc kubenswrapper[4619]: I0126 11:00:28.142710 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793015 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngtbq"]
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793672 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793684 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793694 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793700 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793711 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793717 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793725 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793732 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793742 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793747 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793755 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793761 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793767 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793773 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793781 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793787 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793794 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9985263d-2046-4641-b6cb-235bc8403d32" containerName="collect-profiles"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793799 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9985263d-2046-4641-b6cb-235bc8403d32" containerName="collect-profiles"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793808 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793813 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793821 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793827 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="extract-content"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793834 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793840 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793847 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793853 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: E0126 11:00:33.793860 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793866 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="extract-utilities"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793947 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6d6055-6441-4f47-8107-8886901691cc" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793957 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="722ff7bc-6563-4562-b96f-430b1b2fedd1" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793964 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="74eaac61-fd26-4596-ab74-d1282c0baf2b" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793970 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9985263d-2046-4641-b6cb-235bc8403d32" containerName="collect-profiles"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793980 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b07f24-92c0-4190-a140-6029e82f826d" containerName="marketplace-operator"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.793994 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a310950-4656-4955-b453-e846f29f47d8" containerName="registry-server"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.795356 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngtbq"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.798330 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.799034 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.799173 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.815683 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngtbq"]
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.975931 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.976272 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rfr\" (UniqueName: \"kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq"
Jan 26 11:00:33 crc kubenswrapper[4619]: I0126 11:00:33.976434 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.078121 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rfr\" (UniqueName: \"kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.078469 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.078592 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.079009 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.079021 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.097594 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rfr\" (UniqueName: \"kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr\") pod \"community-operators-ngtbq\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.119845 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.257483 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.339726 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngtbq"] Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.559392 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9klzt"] Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.560332 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.570107 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.570354 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9klzt"] Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.570549 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.576449 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.691416 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.691492 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.691520 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvb7\" (UniqueName: \"kubernetes.io/projected/d5665bb8-e8d9-4970-a1d0-db862b679458-kube-api-access-4wvb7\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.712505 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.712717 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerName="route-controller-manager" containerID="cri-o://7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13" gracePeriod=30 Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.721658 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.721909 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerName="controller-manager" containerID="cri-o://b92f8d2dc9bd1587d57f2ec206986ed3f30ffae41beaa588926f0215d6267611" gracePeriod=30 Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.792998 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.793058 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.793081 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvb7\" (UniqueName: \"kubernetes.io/projected/d5665bb8-e8d9-4970-a1d0-db862b679458-kube-api-access-4wvb7\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.794461 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.801062 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5665bb8-e8d9-4970-a1d0-db862b679458-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.816310 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvb7\" (UniqueName: \"kubernetes.io/projected/d5665bb8-e8d9-4970-a1d0-db862b679458-kube-api-access-4wvb7\") pod \"marketplace-operator-79b997595-9klzt\" (UID: \"d5665bb8-e8d9-4970-a1d0-db862b679458\") " pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:34 crc kubenswrapper[4619]: I0126 11:00:34.907833 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.168995 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.184241 4619 generic.go:334] "Generic (PLEG): container finished" podID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerID="7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13" exitCode=0 Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.184308 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" event={"ID":"41dc8f80-5742-4e4b-943e-571ad0e59027","Type":"ContainerDied","Data":"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13"} Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.184338 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" event={"ID":"41dc8f80-5742-4e4b-943e-571ad0e59027","Type":"ContainerDied","Data":"b00b29b64c7a15b35ae5232c8f63469f95fd6bbe2850b904aeb567299ef45911"} Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.184354 4619 scope.go:117] "RemoveContainer" containerID="7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.184461 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.194285 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pkzzx"] Jan 26 11:00:35 crc kubenswrapper[4619]: E0126 11:00:35.194601 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerName="route-controller-manager" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.194665 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerName="route-controller-manager" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.195900 4619 generic.go:334] "Generic (PLEG): container finished" podID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerID="b92f8d2dc9bd1587d57f2ec206986ed3f30ffae41beaa588926f0215d6267611" exitCode=0 Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.196035 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" containerName="route-controller-manager" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.200307 4619 util.go:30] "No sandbox for pod can be found. 
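The "Killing container with a grace period ... gracePeriod=30" entries above, followed by "Generic (PLEG): container finished ... exitCode=0", show the graceful-shutdown path: the runtime is asked to stop the container with a 30-second budget, and a clean exit is observed before any force kill is needed. A hedged Go sketch of that term-then-kill pattern (the runtime call is faked; not kubelet code):

package main

import (
	"fmt"
	"time"
)

// killContainer delivers the stop signal, then waits up to the grace
// period for the container to exit before escalating.
func killContainer(id string, grace time.Duration, term func(string), exited <-chan struct{}) {
	term(id) // ask the runtime to send SIGTERM
	select {
	case <-exited:
		fmt.Printf("container %s exited within grace period\n", id)
	case <-time.After(grace):
		fmt.Printf("grace period elapsed; force-killing %s\n", id)
	}
}

func main() {
	done := make(chan struct{})
	go func() { time.Sleep(10 * time.Millisecond); close(done) }() // simulate a quick, clean exit
	killContainer("7a0a375ea9f7...", 30*time.Second, func(string) {}, done)
}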
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.202767 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" event={"ID":"5bcc19ee-a154-482d-84f3-3c8aed73db25","Type":"ContainerDied","Data":"b92f8d2dc9bd1587d57f2ec206986ed3f30ffae41beaa588926f0215d6267611"}
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.206522 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210076 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config\") pod \"41dc8f80-5742-4e4b-943e-571ad0e59027\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210156 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt7f9\" (UniqueName: \"kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9\") pod \"41dc8f80-5742-4e4b-943e-571ad0e59027\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210188 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert\") pod \"41dc8f80-5742-4e4b-943e-571ad0e59027\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210249 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca\") pod \"41dc8f80-5742-4e4b-943e-571ad0e59027\" (UID: \"41dc8f80-5742-4e4b-943e-571ad0e59027\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210413 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-utilities\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210456 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-catalog-content\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.210476 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7snh\" (UniqueName: \"kubernetes.io/projected/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-kube-api-access-w7snh\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.217738 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca" (OuterVolumeSpecName: "client-ca") pod "41dc8f80-5742-4e4b-943e-571ad0e59027" (UID: "41dc8f80-5742-4e4b-943e-571ad0e59027"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.218362 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config" (OuterVolumeSpecName: "config") pod "41dc8f80-5742-4e4b-943e-571ad0e59027" (UID: "41dc8f80-5742-4e4b-943e-571ad0e59027"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.224451 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkzzx"]
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.224942 4619 generic.go:334] "Generic (PLEG): container finished" podID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerID="475e2f03beeabed2c4d89eb8b2f75f613df0a990ea6e3c39aebb93df2d9699f3" exitCode=0
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.224972 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerDied","Data":"475e2f03beeabed2c4d89eb8b2f75f613df0a990ea6e3c39aebb93df2d9699f3"}
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.224989 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerStarted","Data":"7c46844b742c475cd0881512fdc3730fd0b0bb26ebda46aec12194d99d406e64"}
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.234860 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41dc8f80-5742-4e4b-943e-571ad0e59027" (UID: "41dc8f80-5742-4e4b-943e-571ad0e59027"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.237654 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9" (OuterVolumeSpecName: "kube-api-access-kt7f9") pod "41dc8f80-5742-4e4b-943e-571ad0e59027" (UID: "41dc8f80-5742-4e4b-943e-571ad0e59027"). InnerVolumeSpecName "kube-api-access-kt7f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.312456 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-utilities\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.312524 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-catalog-content\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.312550 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7snh\" (UniqueName: \"kubernetes.io/projected/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-kube-api-access-w7snh\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.313105 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.313156 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dc8f80-5742-4e4b-943e-571ad0e59027-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.313181 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt7f9\" (UniqueName: \"kubernetes.io/projected/41dc8f80-5742-4e4b-943e-571ad0e59027-kube-api-access-kt7f9\") on node \"crc\" DevicePath \"\""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.313226 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dc8f80-5742-4e4b-943e-571ad0e59027-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.313835 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-utilities\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.314065 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-catalog-content\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.369430 4619 scope.go:117] "RemoveContainer" containerID="7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13"
Jan 26 11:00:35 crc kubenswrapper[4619]: E0126 11:00:35.376304 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13\": container with ID starting with 7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13 not found: ID does not exist" containerID="7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.376352 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13"} err="failed to get container status \"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13\": rpc error: code = NotFound desc = could not find container \"7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13\": container with ID starting with 7a0a375ea9f7f7803fc6394e93d3ed5062e0b64d1272c0a0710dc1c48a190c13 not found: ID does not exist"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.376876 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.381113 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7snh\" (UniqueName: \"kubernetes.io/projected/2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b-kube-api-access-w7snh\") pod \"redhat-marketplace-pkzzx\" (UID: \"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b\") " pod="openshift-marketplace/redhat-marketplace-pkzzx"
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.385856 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9klzt"]
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.416273 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles\") pod \"5bcc19ee-a154-482d-84f3-3c8aed73db25\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.416312 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca\") pod \"5bcc19ee-a154-482d-84f3-3c8aed73db25\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.416330 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzst4\" (UniqueName: \"kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4\") pod \"5bcc19ee-a154-482d-84f3-3c8aed73db25\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.416352 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config\") pod \"5bcc19ee-a154-482d-84f3-3c8aed73db25\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.416379 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert\") pod \"5bcc19ee-a154-482d-84f3-3c8aed73db25\" (UID: \"5bcc19ee-a154-482d-84f3-3c8aed73db25\") "
Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.417443 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
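The "VerifyControllerAttachedVolume started" / "MountVolume started" / "MountVolume.SetUp succeeded" progression for redhat-marketplace-pkzzx above is the mount side of the same reconciler: attachment is confirmed first (trivial for node-local plugins like empty-dir), then each volume is set up in the pod's directory. A small Go sketch of the two stages (names mirror the log but the types are invented):

package main

import "fmt"

type volumeToMount struct{ pod, volume, plugin string }

// mountAll walks each pending volume through the two logged stages.
func mountAll(vols []volumeToMount) {
	for _, v := range vols {
		// Stage 1: confirm the attach controller marked the volume attached.
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.volume, v.pod)
		// Stage 2: set the volume up inside the pod's volumes directory.
		fmt.Printf("MountVolume.SetUp succeeded for volume %q (plugin %s) pod %q\n", v.volume, v.plugin, v.pod)
	}
}

func main() {
	mountAll([]volumeToMount{
		{"redhat-marketplace-pkzzx", "utilities", "kubernetes.io/empty-dir"},
		{"redhat-marketplace-pkzzx", "kube-api-access-w7snh", "kubernetes.io/projected"},
	})
}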
(OuterVolumeSpecName: "proxy-ca-bundles") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.418050 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca" (OuterVolumeSpecName: "client-ca") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.433504 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config" (OuterVolumeSpecName: "config") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.441070 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4" (OuterVolumeSpecName: "kube-api-access-hzst4") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "kube-api-access-hzst4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.462902 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5bcc19ee-a154-482d-84f3-3c8aed73db25" (UID: "5bcc19ee-a154-482d-84f3-3c8aed73db25"). InnerVolumeSpecName "serving-cert". 
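
The reconciler entries above trace the kubelet volume lifecycle on both sides of the controller-manager roll-over: for the pod being created, "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded"; for the pod being deleted, "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached". A minimal sketch that extracts those transitions from a kubelet log like this one (the phrases are copied verbatim from the entries above; the script itself is only illustrative and assumes one log entry per line, as kubelet originally writes it):

import re
import sys

# Lifecycle phrases as they appear in the reconciler / operation_generator
# entries above. klog escapes quotes inside structured fields, so the
# escapes are normalized away before matching.
PHASE_RE = re.compile(
    r'(operationExecutor\.VerifyControllerAttachedVolume started'
    r'|operationExecutor\.MountVolume started'
    r'|MountVolume\.SetUp succeeded'
    r'|operationExecutor\.UnmountVolume started'
    r'|UnmountVolume\.TearDown succeeded'
    r'|Volume detached) for volume "([^"]+)"'
)

for raw in sys.stdin:
    line = raw.replace('\\"', '"')
    for phase, volume in PHASE_RE.findall(line):
        print(f"{phase:60} {volume}")

Fed this log on stdin, that prints one line per transition, e.g. the utilities / catalog-content / kube-api-access-w7snh mounts for redhat-marketplace-pkzzx followed by the five detaches for pod 5bcc19ee-a154-482d-84f3-3c8aed73db25.
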
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.508325 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.512185 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bctm2"] Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.517251 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bcc19ee-a154-482d-84f3-3c8aed73db25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.517280 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.517290 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.517302 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzst4\" (UniqueName: \"kubernetes.io/projected/5bcc19ee-a154-482d-84f3-3c8aed73db25-kube-api-access-hzst4\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.517313 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc19ee-a154-482d-84f3-3c8aed73db25-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.602280 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkzzx" Jan 26 11:00:35 crc kubenswrapper[4619]: I0126 11:00:35.830140 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkzzx"] Jan 26 11:00:35 crc kubenswrapper[4619]: W0126 11:00:35.834062 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc177c5_7cbd_4962_97d4_89b9b2f7ba3b.slice/crio-802ecfb9ce270fe08ab9cf92e03f319eebcc968d476ac0bbd62c15ddb61e970e WatchSource:0}: Error finding container 802ecfb9ce270fe08ab9cf92e03f319eebcc968d476ac0bbd62c15ddb61e970e: Status 404 returned error can't find the container with id 802ecfb9ce270fe08ab9cf92e03f319eebcc968d476ac0bbd62c15ddb61e970e Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.195542 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 11:00:36 crc kubenswrapper[4619]: E0126 11:00:36.195824 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerName="controller-manager" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.195844 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerName="controller-manager" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.195967 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" containerName="controller-manager" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.196767 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.198715 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.212444 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.231292 4619 generic.go:334] "Generic (PLEG): container finished" podID="2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b" containerID="31866ef27cb1c12812fe5106a396c8461f7bd774c01cd22e096d3bff01750079" exitCode=0 Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.231372 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkzzx" event={"ID":"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b","Type":"ContainerDied","Data":"31866ef27cb1c12812fe5106a396c8461f7bd774c01cd22e096d3bff01750079"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.231400 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkzzx" event={"ID":"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b","Type":"ContainerStarted","Data":"802ecfb9ce270fe08ab9cf92e03f319eebcc968d476ac0bbd62c15ddb61e970e"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.237555 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" event={"ID":"5bcc19ee-a154-482d-84f3-3c8aed73db25","Type":"ContainerDied","Data":"5545d2c1f70de6adc3606b2f3f435c4061dacf88cc9f73fd7d903a7606478f7d"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.237570 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwzgx" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.237633 4619 scope.go:117] "RemoveContainer" containerID="b92f8d2dc9bd1587d57f2ec206986ed3f30ffae41beaa588926f0215d6267611" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.241552 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerStarted","Data":"f8917b0da83d0f5f8fa6ab9d7331f997c8f930365af5e60cb42e9978f7764155"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.244700 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" event={"ID":"d5665bb8-e8d9-4970-a1d0-db862b679458","Type":"ContainerStarted","Data":"6532c02b347870a0129673655f27b3e359ee9e5387dca7ed2e76529f566f1e0c"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.244728 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" event={"ID":"d5665bb8-e8d9-4970-a1d0-db862b679458","Type":"ContainerStarted","Data":"4f0085d70921f09e5c4bb5c9d08108daf431e4a489a0272ba9be5d717ffce345"} Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.246305 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.247053 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.334682 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpbt\" (UniqueName: \"kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.334778 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.334878 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.346841 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9klzt" podStartSLOduration=2.346825037 podStartE2EDuration="2.346825037s" podCreationTimestamp="2026-01-26 11:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:00:36.344351919 +0000 UTC m=+335.378392635" watchObservedRunningTime="2026-01-26 11:00:36.346825037 +0000 UTC m=+335.380865753" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.357771 4619 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.360407 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwzgx"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.436183 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpbt\" (UniqueName: \"kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.436507 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.436571 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.437002 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.437130 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.450019 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.450958 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.452745 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.453308 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.453412 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.459075 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.459365 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.460545 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.460641 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.461356 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpbt\" (UniqueName: \"kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt\") pod \"redhat-operators-lqtxs\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.464552 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.465151 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.466295 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.466933 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.468058 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.468296 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.468410 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.468515 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.472722 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.493534 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.508930 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538149 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538205 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538238 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538335 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538445 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfpvx\" (UniqueName: \"kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538516 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538546 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrk4\" (UniqueName: \"kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538583 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: 
\"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.538704 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639791 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639840 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmrk4\" (UniqueName: \"kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639879 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639901 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639936 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639953 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.639972 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 
crc kubenswrapper[4619]: I0126 11:00:36.639998 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.640015 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfpvx\" (UniqueName: \"kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.640781 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.641340 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.642072 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.643784 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.644387 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.656421 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.661807 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmrk4\" (UniqueName: 
\"kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4\") pod \"controller-manager-7766474774-mp9gv\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") " pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.662864 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.670083 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfpvx\" (UniqueName: \"kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx\") pod \"route-controller-manager-6f77dfc79b-b26gp\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") " pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.752455 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.813250 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:36 crc kubenswrapper[4619]: I0126 11:00:36.822599 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.101092 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"] Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.186721 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"] Jan 26 11:00:37 crc kubenswrapper[4619]: W0126 11:00:37.189974 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb3b37a6_b691_4c0d_ac00_8e26b839b5ca.slice/crio-1bb928e0f5d893e0904ee36fd6f7bfa57c32d3064fafe97b6f3098c2482ffe41 WatchSource:0}: Error finding container 1bb928e0f5d893e0904ee36fd6f7bfa57c32d3064fafe97b6f3098c2482ffe41: Status 404 returned error can't find the container with id 1bb928e0f5d893e0904ee36fd6f7bfa57c32d3064fafe97b6f3098c2482ffe41 Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.257059 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" event={"ID":"db3b37a6-b691-4c0d-ac00-8e26b839b5ca","Type":"ContainerStarted","Data":"1bb928e0f5d893e0904ee36fd6f7bfa57c32d3064fafe97b6f3098c2482ffe41"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.258239 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" event={"ID":"360f4dc2-2dd6-4be1-b141-83ed024bf834","Type":"ContainerStarted","Data":"dbd203207e12bdc1c6a4c55172b484bc6e7e7b70fc46a15a36f703c38381a2ee"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.259786 4619 generic.go:334] "Generic (PLEG): container finished" podID="330030a7-d5b2-44ba-8612-30cd6ff41451" 
containerID="25711f0ff69248895bc623d4b0097d73bb0fb0beecdbd12e09214896b631f554" exitCode=0 Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.259873 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerDied","Data":"25711f0ff69248895bc623d4b0097d73bb0fb0beecdbd12e09214896b631f554"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.259910 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerStarted","Data":"db1c34d323e4775aa65cf067c7a73cae46e3407f433c0856041b803a93998390"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.270706 4619 generic.go:334] "Generic (PLEG): container finished" podID="2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b" containerID="78e4f9df6bafcdb26d0e2850eed7978c4225ede92ef9a099cbf6a90b148207da" exitCode=0 Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.313163 4619 generic.go:334] "Generic (PLEG): container finished" podID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerID="f8917b0da83d0f5f8fa6ab9d7331f997c8f930365af5e60cb42e9978f7764155" exitCode=0 Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.315241 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dc8f80-5742-4e4b-943e-571ad0e59027" path="/var/lib/kubelet/pods/41dc8f80-5742-4e4b-943e-571ad0e59027/volumes" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.320176 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bcc19ee-a154-482d-84f3-3c8aed73db25" path="/var/lib/kubelet/pods/5bcc19ee-a154-482d-84f3-3c8aed73db25/volumes" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.320732 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkzzx" event={"ID":"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b","Type":"ContainerDied","Data":"78e4f9df6bafcdb26d0e2850eed7978c4225ede92ef9a099cbf6a90b148207da"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.320781 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerDied","Data":"f8917b0da83d0f5f8fa6ab9d7331f997c8f930365af5e60cb42e9978f7764155"} Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.602246 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxj2j"] Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.603329 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.609395 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.649142 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxj2j"] Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.653963 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-catalog-content\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.654039 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgxb\" (UniqueName: \"kubernetes.io/projected/b7d685d9-1721-485a-b578-d56fa3c14d91-kube-api-access-5xgxb\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.654166 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-utilities\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.755754 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgxb\" (UniqueName: \"kubernetes.io/projected/b7d685d9-1721-485a-b578-d56fa3c14d91-kube-api-access-5xgxb\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.756144 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-utilities\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.756176 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-catalog-content\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.756583 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-utilities\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.756755 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d685d9-1721-485a-b578-d56fa3c14d91-catalog-content\") pod \"certified-operators-fxj2j\" (UID: 
\"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.783922 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgxb\" (UniqueName: \"kubernetes.io/projected/b7d685d9-1721-485a-b578-d56fa3c14d91-kube-api-access-5xgxb\") pod \"certified-operators-fxj2j\" (UID: \"b7d685d9-1721-485a-b578-d56fa3c14d91\") " pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:37 crc kubenswrapper[4619]: I0126 11:00:37.917015 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.334668 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" event={"ID":"db3b37a6-b691-4c0d-ac00-8e26b839b5ca","Type":"ContainerStarted","Data":"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"} Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.335016 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.338602 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" event={"ID":"360f4dc2-2dd6-4be1-b141-83ed024bf834","Type":"ContainerStarted","Data":"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"} Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.338803 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.340318 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerStarted","Data":"8910fd7227232621d269b969684b356a022a9a5d916c00143c3d078f255c8e91"} Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.343681 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.349736 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkzzx" event={"ID":"2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b","Type":"ContainerStarted","Data":"a22f0bee25610f29b94bc481f24f18f48c1914d0a839579696449f317ee4516b"} Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.358357 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.359327 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerStarted","Data":"0e80e14accd9c3e30f1e0412a181f97d8598420e12d43fb926a4a8997640a46c"} Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.409945 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" podStartSLOduration=4.4099236 podStartE2EDuration="4.4099236s" podCreationTimestamp="2026-01-26 11:00:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:00:38.379212504 +0000 UTC m=+337.413253240" watchObservedRunningTime="2026-01-26 11:00:38.4099236 +0000 UTC m=+337.443964316" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.455519 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngtbq" podStartSLOduration=2.809796143 podStartE2EDuration="5.455498836s" podCreationTimestamp="2026-01-26 11:00:33 +0000 UTC" firstStartedPulling="2026-01-26 11:00:35.231811851 +0000 UTC m=+334.265852567" lastFinishedPulling="2026-01-26 11:00:37.877514544 +0000 UTC m=+336.911555260" observedRunningTime="2026-01-26 11:00:38.446825666 +0000 UTC m=+337.480866382" watchObservedRunningTime="2026-01-26 11:00:38.455498836 +0000 UTC m=+337.489539542" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.619435 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxj2j"] Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.619977 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" podStartSLOduration=4.619955986 podStartE2EDuration="4.619955986s" podCreationTimestamp="2026-01-26 11:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:00:38.559937012 +0000 UTC m=+337.593977728" watchObservedRunningTime="2026-01-26 11:00:38.619955986 +0000 UTC m=+337.653996702" Jan 26 11:00:38 crc kubenswrapper[4619]: I0126 11:00:38.648000 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pkzzx" podStartSLOduration=2.16407699 podStartE2EDuration="3.647980438s" podCreationTimestamp="2026-01-26 11:00:35 +0000 UTC" firstStartedPulling="2026-01-26 11:00:36.232788145 +0000 UTC m=+335.266828851" lastFinishedPulling="2026-01-26 11:00:37.716691583 +0000 UTC m=+336.750732299" observedRunningTime="2026-01-26 11:00:38.644345637 +0000 UTC m=+337.678386373" watchObservedRunningTime="2026-01-26 11:00:38.647980438 +0000 UTC m=+337.682021144" Jan 26 11:00:39 crc kubenswrapper[4619]: I0126 11:00:39.365652 4619 generic.go:334] "Generic (PLEG): container finished" podID="b7d685d9-1721-485a-b578-d56fa3c14d91" containerID="bc6c80e13abfb6cfabc6b840e6af18db9f1d041b509a270665d9e989e300b6c8" exitCode=0 Jan 26 11:00:39 crc kubenswrapper[4619]: I0126 11:00:39.365829 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxj2j" event={"ID":"b7d685d9-1721-485a-b578-d56fa3c14d91","Type":"ContainerDied","Data":"bc6c80e13abfb6cfabc6b840e6af18db9f1d041b509a270665d9e989e300b6c8"} Jan 26 11:00:39 crc kubenswrapper[4619]: I0126 11:00:39.366021 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxj2j" event={"ID":"b7d685d9-1721-485a-b578-d56fa3c14d91","Type":"ContainerStarted","Data":"9197a6484b6651bded5ff48a38ef761266e3f0a6771f2a1d03c0fe7acb149465"} Jan 26 11:00:39 crc kubenswrapper[4619]: I0126 11:00:39.368502 4619 generic.go:334] "Generic (PLEG): container finished" podID="330030a7-d5b2-44ba-8612-30cd6ff41451" containerID="8910fd7227232621d269b969684b356a022a9a5d916c00143c3d078f255c8e91" exitCode=0 Jan 26 11:00:39 crc kubenswrapper[4619]: I0126 11:00:39.368735 4619 kubelet.go:2453] "SyncLoop (PLEG): 
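
The pod_startup_latency_tracker entries above encode a relationship you can verify directly from the logged timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). When nothing was pulled, the pull timestamps are the zero time 0001-01-01 and the two durations are equal, as in the marketplace-operator and route-controller-manager entries. Checking the community-operators-ngtbq entry (values copied from the log; the decomposition is inferred from these numbers rather than from kubelet source):

# Seconds past 11:00:00 UTC, copied from the community-operators-ngtbq entry.
created       = 33.000000000   # podCreationTimestamp  2026-01-26 11:00:33
pull_started  = 35.231811851   # firstStartedPulling
pull_finished = 37.877514544   # lastFinishedPulling
observed      = 38.455498836   # watchObservedRunningTime

e2e = observed - created                    # -> 5.455498836 (podStartE2EDuration)
slo = e2e - (pull_finished - pull_started)  # -> 2.809796143 (podStartSLOduration)
print(f"e2e={e2e:.9f} slo={slo:.9f}")
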
event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerDied","Data":"8910fd7227232621d269b969684b356a022a9a5d916c00143c3d078f255c8e91"} Jan 26 11:00:40 crc kubenswrapper[4619]: I0126 11:00:40.376151 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerStarted","Data":"2b21c283e5375b1ebbde08fc468940ffa3ccfdc12de5ca78d5218cab1e589f23"} Jan 26 11:00:40 crc kubenswrapper[4619]: I0126 11:00:40.378271 4619 generic.go:334] "Generic (PLEG): container finished" podID="b7d685d9-1721-485a-b578-d56fa3c14d91" containerID="d6c1f33be16b4066d08e8d435af4ff51aae1f68a9da600749246bb555315aaca" exitCode=0 Jan 26 11:00:40 crc kubenswrapper[4619]: I0126 11:00:40.378455 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxj2j" event={"ID":"b7d685d9-1721-485a-b578-d56fa3c14d91","Type":"ContainerDied","Data":"d6c1f33be16b4066d08e8d435af4ff51aae1f68a9da600749246bb555315aaca"} Jan 26 11:00:40 crc kubenswrapper[4619]: I0126 11:00:40.395540 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lqtxs" podStartSLOduration=1.839959699 podStartE2EDuration="4.395518208s" podCreationTimestamp="2026-01-26 11:00:36 +0000 UTC" firstStartedPulling="2026-01-26 11:00:37.26136595 +0000 UTC m=+336.295406666" lastFinishedPulling="2026-01-26 11:00:39.816924459 +0000 UTC m=+338.850965175" observedRunningTime="2026-01-26 11:00:40.393818711 +0000 UTC m=+339.427859427" watchObservedRunningTime="2026-01-26 11:00:40.395518208 +0000 UTC m=+339.429558924" Jan 26 11:00:41 crc kubenswrapper[4619]: I0126 11:00:41.385734 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxj2j" event={"ID":"b7d685d9-1721-485a-b578-d56fa3c14d91","Type":"ContainerStarted","Data":"9d1978339aa368759c1e46665b751fe6ec4b543e7e4db33ad2f34ff3ef761217"} Jan 26 11:00:41 crc kubenswrapper[4619]: I0126 11:00:41.419332 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxj2j" podStartSLOduration=2.969154803 podStartE2EDuration="4.419313391s" podCreationTimestamp="2026-01-26 11:00:37 +0000 UTC" firstStartedPulling="2026-01-26 11:00:39.368369513 +0000 UTC m=+338.402410229" lastFinishedPulling="2026-01-26 11:00:40.818528101 +0000 UTC m=+339.852568817" observedRunningTime="2026-01-26 11:00:41.416581516 +0000 UTC m=+340.450622232" watchObservedRunningTime="2026-01-26 11:00:41.419313391 +0000 UTC m=+340.453354107" Jan 26 11:00:44 crc kubenswrapper[4619]: I0126 11:00:44.120185 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:44 crc kubenswrapper[4619]: I0126 11:00:44.120253 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:44 crc kubenswrapper[4619]: I0126 11:00:44.169887 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:44 crc kubenswrapper[4619]: I0126 11:00:44.444958 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:00:45 crc kubenswrapper[4619]: I0126 11:00:45.603053 4619 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pkzzx" Jan 26 11:00:45 crc kubenswrapper[4619]: I0126 11:00:45.603679 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pkzzx" Jan 26 11:00:45 crc kubenswrapper[4619]: I0126 11:00:45.653117 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pkzzx" Jan 26 11:00:46 crc kubenswrapper[4619]: I0126 11:00:46.445276 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pkzzx" Jan 26 11:00:46 crc kubenswrapper[4619]: I0126 11:00:46.509863 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:46 crc kubenswrapper[4619]: I0126 11:00:46.509926 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:46 crc kubenswrapper[4619]: I0126 11:00:46.549280 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:47 crc kubenswrapper[4619]: I0126 11:00:47.461555 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 11:00:47 crc kubenswrapper[4619]: I0126 11:00:47.917605 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:47 crc kubenswrapper[4619]: I0126 11:00:47.917703 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:47 crc kubenswrapper[4619]: I0126 11:00:47.959887 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:00:48 crc kubenswrapper[4619]: I0126 11:00:48.472144 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxj2j" Jan 26 11:01:04 crc kubenswrapper[4619]: I0126 11:01:04.761492 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"] Jan 26 11:01:04 crc kubenswrapper[4619]: I0126 11:01:04.762537 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" podUID="360f4dc2-2dd6-4be1-b141-83ed024bf834" containerName="controller-manager" containerID="cri-o://06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182" gracePeriod=30 Jan 26 11:01:04 crc kubenswrapper[4619]: I0126 11:01:04.850564 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"] Jan 26 11:01:04 crc kubenswrapper[4619]: I0126 11:01:04.850768 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" podUID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" containerName="route-controller-manager" containerID="cri-o://e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853" gracePeriod=30 Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.283662 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.293807 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328217 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfpvx\" (UniqueName: \"kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx\") pod \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328279 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca\") pod \"360f4dc2-2dd6-4be1-b141-83ed024bf834\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328363 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert\") pod \"360f4dc2-2dd6-4be1-b141-83ed024bf834\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328407 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmrk4\" (UniqueName: \"kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4\") pod \"360f4dc2-2dd6-4be1-b141-83ed024bf834\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328437 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert\") pod \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.328470 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config\") pod \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329189 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca" (OuterVolumeSpecName: "client-ca") pod "360f4dc2-2dd6-4be1-b141-83ed024bf834" (UID: "360f4dc2-2dd6-4be1-b141-83ed024bf834"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329368 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config\") pod \"360f4dc2-2dd6-4be1-b141-83ed024bf834\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329410 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles\") pod \"360f4dc2-2dd6-4be1-b141-83ed024bf834\" (UID: \"360f4dc2-2dd6-4be1-b141-83ed024bf834\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329461 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca\") pod \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\" (UID: \"db3b37a6-b691-4c0d-ac00-8e26b839b5ca\") "
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329796 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329995 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config" (OuterVolumeSpecName: "config") pod "360f4dc2-2dd6-4be1-b141-83ed024bf834" (UID: "360f4dc2-2dd6-4be1-b141-83ed024bf834"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.329989 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config" (OuterVolumeSpecName: "config") pod "db3b37a6-b691-4c0d-ac00-8e26b839b5ca" (UID: "db3b37a6-b691-4c0d-ac00-8e26b839b5ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.330683 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "360f4dc2-2dd6-4be1-b141-83ed024bf834" (UID: "360f4dc2-2dd6-4be1-b141-83ed024bf834"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.330784 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "db3b37a6-b691-4c0d-ac00-8e26b839b5ca" (UID: "db3b37a6-b691-4c0d-ac00-8e26b839b5ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.333762 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "360f4dc2-2dd6-4be1-b141-83ed024bf834" (UID: "360f4dc2-2dd6-4be1-b141-83ed024bf834"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.334028 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4" (OuterVolumeSpecName: "kube-api-access-bmrk4") pod "360f4dc2-2dd6-4be1-b141-83ed024bf834" (UID: "360f4dc2-2dd6-4be1-b141-83ed024bf834"). InnerVolumeSpecName "kube-api-access-bmrk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.334116 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx" (OuterVolumeSpecName: "kube-api-access-vfpvx") pod "db3b37a6-b691-4c0d-ac00-8e26b839b5ca" (UID: "db3b37a6-b691-4c0d-ac00-8e26b839b5ca"). InnerVolumeSpecName "kube-api-access-vfpvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.334518 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db3b37a6-b691-4c0d-ac00-8e26b839b5ca" (UID: "db3b37a6-b691-4c0d-ac00-8e26b839b5ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431343 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360f4dc2-2dd6-4be1-b141-83ed024bf834-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431387 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmrk4\" (UniqueName: \"kubernetes.io/projected/360f4dc2-2dd6-4be1-b141-83ed024bf834-kube-api-access-bmrk4\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431397 4619 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431407 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431418 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431426 4619 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/360f4dc2-2dd6-4be1-b141-83ed024bf834-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431434 4619 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.431442 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfpvx\" (UniqueName: \"kubernetes.io/projected/db3b37a6-b691-4c0d-ac00-8e26b839b5ca-kube-api-access-vfpvx\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.522653 4619 generic.go:334] "Generic (PLEG): container finished" podID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" containerID="e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853" exitCode=0
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.522764 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" event={"ID":"db3b37a6-b691-4c0d-ac00-8e26b839b5ca","Type":"ContainerDied","Data":"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"}
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.522815 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp" event={"ID":"db3b37a6-b691-4c0d-ac00-8e26b839b5ca","Type":"ContainerDied","Data":"1bb928e0f5d893e0904ee36fd6f7bfa57c32d3064fafe97b6f3098c2482ffe41"}
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.522853 4619 scope.go:117] "RemoveContainer" containerID="e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.523061 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.529339 4619 generic.go:334] "Generic (PLEG): container finished" podID="360f4dc2-2dd6-4be1-b141-83ed024bf834" containerID="06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182" exitCode=0
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.529400 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" event={"ID":"360f4dc2-2dd6-4be1-b141-83ed024bf834","Type":"ContainerDied","Data":"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"}
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.529423 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.529441 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766474774-mp9gv" event={"ID":"360f4dc2-2dd6-4be1-b141-83ed024bf834","Type":"ContainerDied","Data":"dbd203207e12bdc1c6a4c55172b484bc6e7e7b70fc46a15a36f703c38381a2ee"}
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.549123 4619 scope.go:117] "RemoveContainer" containerID="e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"
Jan 26 11:01:05 crc kubenswrapper[4619]: E0126 11:01:05.549986 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853\": container with ID starting with e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853 not found: ID does not exist" containerID="e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.550034 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853"} err="failed to get container status \"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853\": rpc error: code = NotFound desc = could not find container \"e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853\": container with ID starting with e03aab3b9089e5822f7919aa7576af2792b7c6f8ae7eed6afd79092ceb37a853 not found: ID does not exist"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.550065 4619 scope.go:117] "RemoveContainer" containerID="06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.568582 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"]
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.571593 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7766474774-mp9gv"]
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.572202 4619 scope.go:117] "RemoveContainer" containerID="06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"
Jan 26 11:01:05 crc kubenswrapper[4619]: E0126 11:01:05.573644 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182\": container with ID starting with 06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182 not found: ID does not exist" containerID="06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.573693 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182"} err="failed to get container status \"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182\": rpc error: code = NotFound desc = could not find container \"06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182\": container with ID starting with 06d829f959ba64dcb5bdd8ba579881a429b135362fa36db6cec5149632a0a182 not found: ID does not exist"
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.578089 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"]
Jan 26 11:01:05 crc kubenswrapper[4619]: I0126 11:01:05.582016 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f77dfc79b-b26gp"]
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.486860 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"]
Jan 26 11:01:06 crc kubenswrapper[4619]: E0126 11:01:06.487465 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360f4dc2-2dd6-4be1-b141-83ed024bf834" containerName="controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.487483 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="360f4dc2-2dd6-4be1-b141-83ed024bf834" containerName="controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: E0126 11:01:06.487507 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" containerName="route-controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.487515 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" containerName="route-controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.487644 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="360f4dc2-2dd6-4be1-b141-83ed024bf834" containerName="controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.487660 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" containerName="route-controller-manager"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.488084 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.489760 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"]
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.490362 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.495278 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.496474 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.503175 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.503410 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.503580 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.503870 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.504017 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.504050 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.504179 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.504266 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.505700 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.511807 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.537581 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543760 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e84e1b2-ac73-4d6c-a004-f996b877a730-serving-cert\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543828 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crp4c\" (UniqueName: \"kubernetes.io/projected/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-kube-api-access-crp4c\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543869 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-proxy-ca-bundles\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543891 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cch\" (UniqueName: \"kubernetes.io/projected/7e84e1b2-ac73-4d6c-a004-f996b877a730-kube-api-access-n2cch\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543930 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-config\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.543964 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-client-ca\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.544003 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-serving-cert\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.544031 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-client-ca\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.544060 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-config\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.545349 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"]
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.550262 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"]
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646104 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-config\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646211 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e84e1b2-ac73-4d6c-a004-f996b877a730-serving-cert\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646251 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crp4c\" (UniqueName: \"kubernetes.io/projected/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-kube-api-access-crp4c\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646280 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-proxy-ca-bundles\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646300 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cch\" (UniqueName: \"kubernetes.io/projected/7e84e1b2-ac73-4d6c-a004-f996b877a730-kube-api-access-n2cch\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646333 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-config\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646768 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-client-ca\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646802 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-serving-cert\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.646824 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-client-ca\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.647980 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-client-ca\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.648182 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-config\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.649721 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-client-ca\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.650169 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-config\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.651710 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e84e1b2-ac73-4d6c-a004-f996b877a730-serving-cert\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.651943 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e84e1b2-ac73-4d6c-a004-f996b877a730-proxy-ca-bundles\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.654975 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-serving-cert\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.665007 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crp4c\" (UniqueName: \"kubernetes.io/projected/173a5123-d2b9-45f1-9b70-8b3a80a6bd71-kube-api-access-crp4c\") pod \"route-controller-manager-577b5545c5-hdvhl\" (UID: \"173a5123-d2b9-45f1-9b70-8b3a80a6bd71\") " pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.667270 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cch\" (UniqueName: \"kubernetes.io/projected/7e84e1b2-ac73-4d6c-a004-f996b877a730-kube-api-access-n2cch\") pod \"controller-manager-6d88cb4b77-qhh6x\" (UID: \"7e84e1b2-ac73-4d6c-a004-f996b877a730\") " pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.826408 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:06 crc kubenswrapper[4619]: I0126 11:01:06.857755 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.184089 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"]
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.267993 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360f4dc2-2dd6-4be1-b141-83ed024bf834" path="/var/lib/kubelet/pods/360f4dc2-2dd6-4be1-b141-83ed024bf834/volumes"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.269025 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3b37a6-b691-4c0d-ac00-8e26b839b5ca" path="/var/lib/kubelet/pods/db3b37a6-b691-4c0d-ac00-8e26b839b5ca/volumes"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.308943 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"]
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.567768 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl" event={"ID":"173a5123-d2b9-45f1-9b70-8b3a80a6bd71","Type":"ContainerStarted","Data":"5de632defc0f246589683910a0aa121f3ab7d0e576ed60825152075b3e417d6b"}
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.567843 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl" event={"ID":"173a5123-d2b9-45f1-9b70-8b3a80a6bd71","Type":"ContainerStarted","Data":"bd9d8ffb6dca4ba94ad80728e92d74e46a9fd2b6e2476caf29f9be46e141f31f"}
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.568005 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.569469 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x" event={"ID":"7e84e1b2-ac73-4d6c-a004-f996b877a730","Type":"ContainerStarted","Data":"46afca3d4b5c281ae370c94b89ec97cc025cd370400ec41906dfd299857e2048"}
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.569972 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x" event={"ID":"7e84e1b2-ac73-4d6c-a004-f996b877a730","Type":"ContainerStarted","Data":"b057e69a45dec286a59bf3b792bb9aac02406969b8d9ae4451d5c75e228be85d"}
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.570347 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
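Each new pod above walks the same mount pipeline: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, once per volume. Grepping the SetUp lines therefore gives a quick per-pod mount timeline. A throwaway scanner for this artifact; the regex is tailored to the escaped quoting in these particular lines and nothing else:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Reads a kubelet log on stdin and prints "timestamp volume pod" for
    // every successful volume mount, reconstructing each pod's mount timeline.
    func main() {
        re := regexp.MustCompile(`^(\w+ \d+ [\d:]+) .*MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\".* pod="([^"]+)"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // log lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Println(m[1], m[2], m[3])
            }
        }
    }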
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.578595 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.606379 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl" podStartSLOduration=3.6063570990000002 podStartE2EDuration="3.606357099s" podCreationTimestamp="2026-01-26 11:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:01:07.600931511 +0000 UTC m=+366.634972237" watchObservedRunningTime="2026-01-26 11:01:07.606357099 +0000 UTC m=+366.640397815"
Jan 26 11:01:07 crc kubenswrapper[4619]: I0126 11:01:07.639496 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d88cb4b77-qhh6x" podStartSLOduration=3.639475652 podStartE2EDuration="3.639475652s" podCreationTimestamp="2026-01-26 11:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:01:07.638749753 +0000 UTC m=+366.672790479" watchObservedRunningTime="2026-01-26 11:01:07.639475652 +0000 UTC m=+366.673516368"
Jan 26 11:01:08 crc kubenswrapper[4619]: I0126 11:01:08.334739 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-577b5545c5-hdvhl"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.372538 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7fvpx"]
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.373599 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
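The podStartSLOduration values above are plain arithmetic: the watch-observed running time minus podCreationTimestamp, e.g. 11:01:07.606357099 - 11:01:04 = 3.606357099s for the route-controller-manager pod (the pull timestamps are the zero time because the images were already present, so no pull time is subtracted). The same computation, using the two timestamps from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-26 11:01:04 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-26 11:01:07.606357099 +0000 UTC")
        fmt.Println(running.Sub(created)) // 3.606357099s
    }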
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.384915 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7fvpx"]
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431014 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-bound-sa-token\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431186 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-certificates\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431274 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-tls\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431314 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj89m\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-kube-api-access-tj89m\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431339 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431406 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df858c3c-2b34-4a89-942c-6c1f590ac35e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431450 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-trusted-ca\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.431483 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df858c3c-2b34-4a89-942c-6c1f590ac35e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.458984 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.532641 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-bound-sa-token\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.532933 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-certificates\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.532966 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-tls\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.532992 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj89m\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-kube-api-access-tj89m\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.533020 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df858c3c-2b34-4a89-942c-6c1f590ac35e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.533041 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-trusted-ca\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.533062 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df858c3c-2b34-4a89-942c-6c1f590ac35e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.534503 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-trusted-ca\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.534561 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-certificates\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.534950 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df858c3c-2b34-4a89-942c-6c1f590ac35e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.540406 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df858c3c-2b34-4a89-942c-6c1f590ac35e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.540659 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-registry-tls\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.549592 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-bound-sa-token\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.551213 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj89m\" (UniqueName: \"kubernetes.io/projected/df858c3c-2b34-4a89-942c-6c1f590ac35e-kube-api-access-tj89m\") pod \"image-registry-66df7c8f76-7fvpx\" (UID: \"df858c3c-2b34-4a89-942c-6c1f590ac35e\") " pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:13 crc kubenswrapper[4619]: I0126 11:01:13.693444 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.127173 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7fvpx"]
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.239192 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.239273 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.603793 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx" event={"ID":"df858c3c-2b34-4a89-942c-6c1f590ac35e","Type":"ContainerStarted","Data":"7e38dd1173038dedb7d175e18b6715c1bf1c993c02c92c3765c2664b2ddb6c99"}
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.603842 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx" event={"ID":"df858c3c-2b34-4a89-942c-6c1f590ac35e","Type":"ContainerStarted","Data":"f4ce3eb222f6f310045ecde89d2fcb601c34cb91bd768f80eb26a0df0e8fd1de"}
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.604825 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:14 crc kubenswrapper[4619]: I0126 11:01:14.624520 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx" podStartSLOduration=1.624503413 podStartE2EDuration="1.624503413s" podCreationTimestamp="2026-01-26 11:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:01:14.62041688 +0000 UTC m=+373.654457676" watchObservedRunningTime="2026-01-26 11:01:14.624503413 +0000 UTC m=+373.658544129"
Jan 26 11:01:33 crc kubenswrapper[4619]: I0126 11:01:33.711077 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7fvpx"
Jan 26 11:01:33 crc kubenswrapper[4619]: I0126 11:01:33.817899 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"]
Jan 26 11:01:44 crc kubenswrapper[4619]: I0126 11:01:44.235103 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:01:44 crc kubenswrapper[4619]: I0126 11:01:44.236020 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
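The machine-config-daemon probe failures above are the prober's HTTP GET to the container's health endpoint being refused while nothing is listening on the port; the kubelet keeps recording failures until the failure threshold is crossed (at 11:02:14 below) and only then kills and restarts the container. A minimal reproduction of the check itself, assuming the same endpoint and the usual rule that a 2xx/3xx response counts as success:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP liveness check; with no listener on the port
    // it returns the same "connect: connection refused" error seen above.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }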
Jan 26 11:01:58 crc kubenswrapper[4619]: I0126 11:01:58.913782 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" podUID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" containerName="registry" containerID="cri-o://be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a" gracePeriod=30
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.300033 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5"
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.494709 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.494760 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjmc9\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.494836 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.494881 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.494907 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.495705 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.495834 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.496916 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.496949 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted\") pod \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\" (UID: \"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6\") "
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.497254 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.498298 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.501034 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.501202 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.503064 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.504280 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.504778 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9" (OuterVolumeSpecName: "kube-api-access-tjmc9") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "kube-api-access-tjmc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.518002 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" (UID: "c4960ad8-430c-46d0-bfc5-3fb9fcd647a6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598820 4619 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598882 4619 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598893 4619 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598902 4619 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598928 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjmc9\" (UniqueName: \"kubernetes.io/projected/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-kube-api-access-tjmc9\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.598938 4619 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.855136 4619 generic.go:334] "Generic (PLEG): container finished" podID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" containerID="be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a" exitCode=0
Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.855188 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" event={"ID":"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6","Type":"ContainerDied","Data":"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a"}
event={"ID":"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6","Type":"ContainerDied","Data":"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a"} Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.855226 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" event={"ID":"c4960ad8-430c-46d0-bfc5-3fb9fcd647a6","Type":"ContainerDied","Data":"c3a36e3831b833e5679daea878971a7d8fa3c37a098ecb26f320e8c108f2ab81"} Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.855251 4619 scope.go:117] "RemoveContainer" containerID="be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a" Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.855690 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-84lr5" Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.884496 4619 scope.go:117] "RemoveContainer" containerID="be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a" Jan 26 11:01:59 crc kubenswrapper[4619]: E0126 11:01:59.885118 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a\": container with ID starting with be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a not found: ID does not exist" containerID="be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a" Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.885235 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a"} err="failed to get container status \"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a\": rpc error: code = NotFound desc = could not find container \"be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a\": container with ID starting with be7153fae68659e0fa817b2f8082f6d124a8cf9a6d644a7ef63861792aacc96a not found: ID does not exist" Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.903822 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"] Jan 26 11:01:59 crc kubenswrapper[4619]: I0126 11:01:59.914218 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-84lr5"] Jan 26 11:02:01 crc kubenswrapper[4619]: I0126 11:02:01.275084 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" path="/var/lib/kubelet/pods/c4960ad8-430c-46d0-bfc5-3fb9fcd647a6/volumes" Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.234691 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.235293 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.235345 4619 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.236018 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.236084 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588" gracePeriod=600 Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.931721 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588" exitCode=0 Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.931800 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588"} Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.932092 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e"} Jan 26 11:02:14 crc kubenswrapper[4619]: I0126 11:02:14.932121 4619 scope.go:117] "RemoveContainer" containerID="955ffc560e93abc33f313fd19772d0f0455e46a719c4fe1f86c14d0ff138a7dd" Jan 26 11:04:14 crc kubenswrapper[4619]: I0126 11:04:14.234372 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:04:14 crc kubenswrapper[4619]: I0126 11:04:14.234935 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:04:44 crc kubenswrapper[4619]: I0126 11:04:44.234854 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:04:44 crc kubenswrapper[4619]: I0126 11:04:44.235912 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
26 11:05:14 crc kubenswrapper[4619]: I0126 11:05:14.234574 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:05:14 crc kubenswrapper[4619]: I0126 11:05:14.235342 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:05:14 crc kubenswrapper[4619]: I0126 11:05:14.235424 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:05:14 crc kubenswrapper[4619]: I0126 11:05:14.236251 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:05:14 crc kubenswrapper[4619]: I0126 11:05:14.236332 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e" gracePeriod=600 Jan 26 11:05:15 crc kubenswrapper[4619]: I0126 11:05:15.034350 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e" exitCode=0 Jan 26 11:05:15 crc kubenswrapper[4619]: I0126 11:05:15.034393 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e"} Jan 26 11:05:15 crc kubenswrapper[4619]: I0126 11:05:15.034698 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb"} Jan 26 11:05:15 crc kubenswrapper[4619]: I0126 11:05:15.034724 4619 scope.go:117] "RemoveContainer" containerID="cadde2c282e632013097b122d7a86397094a5e7b66ec355994b21f7cc038c588" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.337686 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f"] Jan 26 11:05:28 crc kubenswrapper[4619]: E0126 11:05:28.338471 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" containerName="registry" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.338488 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" containerName="registry" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.338598 4619 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c4960ad8-430c-46d0-bfc5-3fb9fcd647a6" containerName="registry" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.339078 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.341171 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.342482 4619 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8sk8k" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.353832 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.357545 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-pzvld"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.358359 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pzvld" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.361161 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.363529 4619 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bzx9t" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.367399 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-46rtj"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.368042 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.371324 4619 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9bdqz" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.381989 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-46rtj"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.382841 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psd5\" (UniqueName: \"kubernetes.io/projected/b4ddb0da-8d36-41cf-a6f1-f02a48086888-kube-api-access-6psd5\") pod \"cert-manager-858654f9db-pzvld\" (UID: \"b4ddb0da-8d36-41cf-a6f1-f02a48086888\") " pod="cert-manager/cert-manager-858654f9db-pzvld" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.382896 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbr4w\" (UniqueName: \"kubernetes.io/projected/e256cee0-a4f8-46ca-bad9-4abc6bf31216-kube-api-access-gbr4w\") pod \"cert-manager-webhook-687f57d79b-46rtj\" (UID: \"e256cee0-a4f8-46ca-bad9-4abc6bf31216\") " pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.382915 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xsm\" (UniqueName: \"kubernetes.io/projected/cb296f0e-c4e5-4b2b-82de-49af144cbf77-kube-api-access-g4xsm\") pod \"cert-manager-cainjector-cf98fcc89-nnd7f\" (UID: \"cb296f0e-c4e5-4b2b-82de-49af144cbf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.388074 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pzvld"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.483713 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xsm\" (UniqueName: \"kubernetes.io/projected/cb296f0e-c4e5-4b2b-82de-49af144cbf77-kube-api-access-g4xsm\") pod \"cert-manager-cainjector-cf98fcc89-nnd7f\" (UID: \"cb296f0e-c4e5-4b2b-82de-49af144cbf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.483754 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbr4w\" (UniqueName: \"kubernetes.io/projected/e256cee0-a4f8-46ca-bad9-4abc6bf31216-kube-api-access-gbr4w\") pod \"cert-manager-webhook-687f57d79b-46rtj\" (UID: \"e256cee0-a4f8-46ca-bad9-4abc6bf31216\") " pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.483933 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psd5\" (UniqueName: \"kubernetes.io/projected/b4ddb0da-8d36-41cf-a6f1-f02a48086888-kube-api-access-6psd5\") pod \"cert-manager-858654f9db-pzvld\" (UID: \"b4ddb0da-8d36-41cf-a6f1-f02a48086888\") " pod="cert-manager/cert-manager-858654f9db-pzvld" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.503704 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbr4w\" (UniqueName: \"kubernetes.io/projected/e256cee0-a4f8-46ca-bad9-4abc6bf31216-kube-api-access-gbr4w\") pod \"cert-manager-webhook-687f57d79b-46rtj\" (UID: \"e256cee0-a4f8-46ca-bad9-4abc6bf31216\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.504709 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psd5\" (UniqueName: \"kubernetes.io/projected/b4ddb0da-8d36-41cf-a6f1-f02a48086888-kube-api-access-6psd5\") pod \"cert-manager-858654f9db-pzvld\" (UID: \"b4ddb0da-8d36-41cf-a6f1-f02a48086888\") " pod="cert-manager/cert-manager-858654f9db-pzvld" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.505892 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xsm\" (UniqueName: \"kubernetes.io/projected/cb296f0e-c4e5-4b2b-82de-49af144cbf77-kube-api-access-g4xsm\") pod \"cert-manager-cainjector-cf98fcc89-nnd7f\" (UID: \"cb296f0e-c4e5-4b2b-82de-49af144cbf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.652153 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.669459 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pzvld" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.680237 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.951465 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-46rtj"] Jan 26 11:05:28 crc kubenswrapper[4619]: I0126 11:05:28.965539 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:05:29 crc kubenswrapper[4619]: I0126 11:05:29.107482 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f"] Jan 26 11:05:29 crc kubenswrapper[4619]: I0126 11:05:29.112024 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pzvld"] Jan 26 11:05:29 crc kubenswrapper[4619]: W0126 11:05:29.113937 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ddb0da_8d36_41cf_a6f1_f02a48086888.slice/crio-24d3ad73d12630ba1d2bcbf3c719729ae499fd06fe29872f65d1e14940a43359 WatchSource:0}: Error finding container 24d3ad73d12630ba1d2bcbf3c719729ae499fd06fe29872f65d1e14940a43359: Status 404 returned error can't find the container with id 24d3ad73d12630ba1d2bcbf3c719729ae499fd06fe29872f65d1e14940a43359 Jan 26 11:05:29 crc kubenswrapper[4619]: I0126 11:05:29.114363 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" event={"ID":"e256cee0-a4f8-46ca-bad9-4abc6bf31216","Type":"ContainerStarted","Data":"d1315681ca93d09d3f0ecfec16c4b3acbd7e51ad6f6bc77a06756050cf081113"} Jan 26 11:05:29 crc kubenswrapper[4619]: W0126 11:05:29.114687 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb296f0e_c4e5_4b2b_82de_49af144cbf77.slice/crio-ed177139bc9c00a1c771fb6631e0d89f3ab55c40e4f69d4e337060a25b40b379 WatchSource:0}: Error finding container ed177139bc9c00a1c771fb6631e0d89f3ab55c40e4f69d4e337060a25b40b379: Status 404 returned error can't find the container with id ed177139bc9c00a1c771fb6631e0d89f3ab55c40e4f69d4e337060a25b40b379 
Jan 26 11:05:30 crc kubenswrapper[4619]: I0126 11:05:30.120560 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" event={"ID":"cb296f0e-c4e5-4b2b-82de-49af144cbf77","Type":"ContainerStarted","Data":"ed177139bc9c00a1c771fb6631e0d89f3ab55c40e4f69d4e337060a25b40b379"}
Jan 26 11:05:30 crc kubenswrapper[4619]: I0126 11:05:30.121801 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pzvld" event={"ID":"b4ddb0da-8d36-41cf-a6f1-f02a48086888","Type":"ContainerStarted","Data":"24d3ad73d12630ba1d2bcbf3c719729ae499fd06fe29872f65d1e14940a43359"}
Jan 26 11:05:34 crc kubenswrapper[4619]: I0126 11:05:34.156776 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" event={"ID":"cb296f0e-c4e5-4b2b-82de-49af144cbf77","Type":"ContainerStarted","Data":"6e7b1b2d40cb5b39039be7c5ac47cc556e1e90fd5af1750f81a3b857709d70ec"}
Jan 26 11:05:34 crc kubenswrapper[4619]: I0126 11:05:34.160817 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" event={"ID":"e256cee0-a4f8-46ca-bad9-4abc6bf31216","Type":"ContainerStarted","Data":"1ee22633171ef8f9393d85e0a6005aa0e00e77ff8751ff7d5660fff423acbd62"}
Jan 26 11:05:35 crc kubenswrapper[4619]: I0126 11:05:35.196969 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj" podStartSLOduration=3.143783632 podStartE2EDuration="7.196943942s" podCreationTimestamp="2026-01-26 11:05:28 +0000 UTC" firstStartedPulling="2026-01-26 11:05:28.965262109 +0000 UTC m=+627.999302815" lastFinishedPulling="2026-01-26 11:05:33.018422389 +0000 UTC m=+632.052463125" observedRunningTime="2026-01-26 11:05:35.18175982 +0000 UTC m=+634.215800546" watchObservedRunningTime="2026-01-26 11:05:35.196943942 +0000 UTC m=+634.230984658"
Jan 26 11:05:35 crc kubenswrapper[4619]: I0126 11:05:35.211016 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nnd7f" podStartSLOduration=3.310650234 podStartE2EDuration="7.210995368s" podCreationTimestamp="2026-01-26 11:05:28 +0000 UTC" firstStartedPulling="2026-01-26 11:05:29.11724768 +0000 UTC m=+628.151288396" lastFinishedPulling="2026-01-26 11:05:33.017592824 +0000 UTC m=+632.051633530" observedRunningTime="2026-01-26 11:05:35.206485828 +0000 UTC m=+634.240526544" watchObservedRunningTime="2026-01-26 11:05:35.210995368 +0000 UTC m=+634.245036084"
Jan 26 11:05:37 crc kubenswrapper[4619]: I0126 11:05:37.177252 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pzvld" event={"ID":"b4ddb0da-8d36-41cf-a6f1-f02a48086888","Type":"ContainerStarted","Data":"765c22b9a7f2c82230649f8fc249a51c0b5baeb459aef5209d491d7e8ad63d7a"}
Jan 26 11:05:37 crc kubenswrapper[4619]: I0126 11:05:37.193098 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-pzvld" podStartSLOduration=2.403661278 podStartE2EDuration="9.193077261s" podCreationTimestamp="2026-01-26 11:05:28 +0000 UTC" firstStartedPulling="2026-01-26 11:05:29.117817507 +0000 UTC m=+628.151858223" lastFinishedPulling="2026-01-26 11:05:35.90723349 +0000 UTC m=+634.941274206" observedRunningTime="2026-01-26 11:05:37.191003956 +0000 UTC m=+636.225044672" watchObservedRunningTime="2026-01-26 11:05:37.193077261 +0000 UTC m=+636.227117987"
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.192154 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6xtv"]
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.192896 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-controller" containerID="cri-o://8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193265 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="sbdb" containerID="cri-o://7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193326 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="nbdb" containerID="cri-o://a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193367 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="northd" containerID="cri-o://732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193418 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193456 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-node" containerID="cri-o://cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.193500 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-acl-logging" containerID="cri-o://a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.232674 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller" containerID="cri-o://f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" gracePeriod=30
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.680423 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj"
Jan 26 11:05:38 crc kubenswrapper[4619]: I0126 11:05:38.682477 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-46rtj"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.029687 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/3.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.032026 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovn-acl-logging/0.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.032544 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovn-controller/0.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.033178 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093330 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w8854"]
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093522 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="sbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093533 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="sbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093543 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="nbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093549 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="nbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093558 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093564 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093574 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093579 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093587 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093594 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093603 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kubecfg-setup"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093608 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kubecfg-setup"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093632 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="northd"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093638 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="northd"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093648 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-acl-logging"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093655 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-acl-logging"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093663 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-node"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093669 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-node"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093678 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093685 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093696 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093702 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093710 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093716 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093793 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093803 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="nbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093811 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093820 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="sbdb"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093828 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="northd"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093836 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-acl-logging"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093842 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovn-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093850 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093857 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093863 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="kube-rbac-proxy-node"
Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.093956 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.093964 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.094039 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.094229 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" containerName="ovnkube-controller"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.095498 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.123605 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.123932 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124071 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124255 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kls7n\" (UniqueName: \"kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124423 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124595 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124775 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124922 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124818 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124852 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.124873 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125030 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125059 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125177 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125214 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125270 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125355 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125401 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125434 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125506 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125563 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125607 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125678 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125720 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125757 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin\") pod \"9ed93d0d-0709-4425-b378-6b8a15318070\" (UID: \"9ed93d0d-0709-4425-b378-6b8a15318070\") "
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125927 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log" (OuterVolumeSpecName: "node-log") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126030 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash" (OuterVolumeSpecName: "host-slash") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126081 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126122 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126165 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126202 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket" (OuterVolumeSpecName: "log-socket") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126238 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125939 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-slash\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126539 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-systemd-units\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126739 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-log-socket\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127104 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127275 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovn-node-metrics-cert\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127437 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-var-lib-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127596 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-etc-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127802 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-script-lib\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127964 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kt2m\" (UniqueName: \"kubernetes.io/projected/71ad1032-2923-4d44-86a5-68d9c56fc1b9-kube-api-access-6kt2m\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.128110 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-node-log\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.128280 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-netd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.128458 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-config\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.128684 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-kubelet\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.128855 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.126898 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127306 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127609 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.127680 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.125998 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.129402 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-bin\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.129553 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-env-overrides\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.129762 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.129914 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-netns\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130043 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-ovn\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130236 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-systemd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130335 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n" (OuterVolumeSpecName: "kube-api-access-kls7n") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "kube-api-access-kls7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130492 4619 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130611 4619 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130820 4619 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130972 4619 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131093 4619 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131196 4619 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131326 4619 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131431 4619 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131558 4619 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131686 4619 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-log-socket\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131793 4619 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.131894 4619 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.132047 4619 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.132161 4619 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.132257 4619 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.132360 4619 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.132478 4619 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ed93d0d-0709-4425-b378-6b8a15318070-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.130747 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.140371 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9ed93d0d-0709-4425-b378-6b8a15318070" (UID: "9ed93d0d-0709-4425-b378-6b8a15318070"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.193204 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovnkube-controller/3.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195126 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovn-acl-logging/0.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195552 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6xtv_9ed93d0d-0709-4425-b378-6b8a15318070/ovn-controller/0.log"
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195833 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195859 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195866 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195872 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195878 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195883 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" exitCode=0
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195889 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" exitCode=143
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195896 4619 generic.go:334] "Generic (PLEG): container finished" podID="9ed93d0d-0709-4425-b378-6b8a15318070" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" exitCode=143
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195931 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"}
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195959 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"}
Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195971 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195980 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195989 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.195997 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196006 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196015 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196020 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196026 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196030 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196035 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196040 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196045 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196050 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196056 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196064 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196070 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196076 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196081 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196086 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196090 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196096 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196101 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196106 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196111 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196117 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196125 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196131 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 
11:05:39.196135 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196141 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196145 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196150 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196155 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196160 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196165 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196170 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196176 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" event={"ID":"9ed93d0d-0709-4425-b378-6b8a15318070","Type":"ContainerDied","Data":"9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196184 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196190 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196195 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196200 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196205 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 
11:05:39.196210 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196216 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196222 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196228 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196235 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196250 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.196410 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6xtv" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.199482 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/2.log" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.200779 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/1.log" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.200821 4619 generic.go:334] "Generic (PLEG): container finished" podID="8aab93f8-6555-4389-b15c-9af458caa339" containerID="9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db" exitCode=2 Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.200845 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerDied","Data":"9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.200877 4619 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406"} Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.201229 4619 scope.go:117] "RemoveContainer" containerID="9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.201371 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-684hz_openshift-multus(8aab93f8-6555-4389-b15c-9af458caa339)\"" pod="openshift-multus/multus-684hz" podUID="8aab93f8-6555-4389-b15c-9af458caa339" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.218510 4619 scope.go:117] "RemoveContainer" 
containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233383 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-systemd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233444 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-slash\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233462 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-systemd-units\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233511 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-log-socket\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233543 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233541 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-systemd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233570 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-systemd-units\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233594 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovn-node-metrics-cert\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233643 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-log-socket\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 
11:05:39.233682 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-var-lib-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233744 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-etc-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233773 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-script-lib\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233820 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kt2m\" (UniqueName: \"kubernetes.io/projected/71ad1032-2923-4d44-86a5-68d9c56fc1b9-kube-api-access-6kt2m\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233846 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-node-log\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233890 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-netd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233951 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-config\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233982 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-kubelet\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234007 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234031 4619 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-bin\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234057 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-env-overrides\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234093 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234138 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-netns\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234161 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-ovn\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234205 4619 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ed93d0d-0709-4425-b378-6b8a15318070-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234220 4619 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed93d0d-0709-4425-b378-6b8a15318070-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234235 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kls7n\" (UniqueName: \"kubernetes.io/projected/9ed93d0d-0709-4425-b378-6b8a15318070-kube-api-access-kls7n\") on node \"crc\" DevicePath \"\"" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234268 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-ovn\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234725 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234762 4619 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-run-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234688 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-kubelet\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.234817 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-bin\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.235068 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-netns\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233543 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-slash\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.235153 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-var-lib-openvswitch\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.233686 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.237117 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-node-log\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.237673 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-host-cni-netd\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.237710 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/71ad1032-2923-4d44-86a5-68d9c56fc1b9-etc-openvswitch\") pod \"ovnkube-node-w8854\" (UID: 
\"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.243388 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-config\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.243849 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-env-overrides\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.243999 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovnkube-script-lib\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.244095 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71ad1032-2923-4d44-86a5-68d9c56fc1b9-ovn-node-metrics-cert\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.249662 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6xtv"] Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.255903 4619 scope.go:117] "RemoveContainer" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.256545 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed93d0d_0709_4425_b378_6b8a15318070.slice/crio-9c1a4d96df40284be58e1e9e411ed2c16033f6ffb216d751d60d1a69308707c0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed93d0d_0709_4425_b378_6b8a15318070.slice\": RecentStats: unable to find data in memory cache]" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.259128 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kt2m\" (UniqueName: \"kubernetes.io/projected/71ad1032-2923-4d44-86a5-68d9c56fc1b9-kube-api-access-6kt2m\") pod \"ovnkube-node-w8854\" (UID: \"71ad1032-2923-4d44-86a5-68d9c56fc1b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.259746 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6xtv"] Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.269134 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed93d0d-0709-4425-b378-6b8a15318070" path="/var/lib/kubelet/pods/9ed93d0d-0709-4425-b378-6b8a15318070/volumes" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.271457 4619 scope.go:117] "RemoveContainer" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc 
kubenswrapper[4619]: I0126 11:05:39.283648 4619 scope.go:117] "RemoveContainer" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.293787 4619 scope.go:117] "RemoveContainer" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.309824 4619 scope.go:117] "RemoveContainer" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.333552 4619 scope.go:117] "RemoveContainer" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.344965 4619 scope.go:117] "RemoveContainer" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.358476 4619 scope.go:117] "RemoveContainer" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.369507 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.369897 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.369988 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} err="failed to get container status \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.370039 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.370241 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": container with ID starting with 7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea not found: ID does not exist" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.370314 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} err="failed to get container status \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": rpc error: code = NotFound desc = could not find container \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": container with ID starting with 7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea not found: ID does not exist" Jan 26 11:05:39 crc 
kubenswrapper[4619]: I0126 11:05:39.370335 4619 scope.go:117] "RemoveContainer" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.370749 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": container with ID starting with 7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee not found: ID does not exist" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.370789 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} err="failed to get container status \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": rpc error: code = NotFound desc = could not find container \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": container with ID starting with 7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.370832 4619 scope.go:117] "RemoveContainer" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.371245 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": container with ID starting with a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8 not found: ID does not exist" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.371308 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} err="failed to get container status \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": rpc error: code = NotFound desc = could not find container \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": container with ID starting with a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.371350 4619 scope.go:117] "RemoveContainer" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.371807 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": container with ID starting with 732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e not found: ID does not exist" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.371831 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} err="failed to get container status \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": rpc error: code = NotFound desc = could not find container 
\"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": container with ID starting with 732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.371875 4619 scope.go:117] "RemoveContainer" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.372192 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": container with ID starting with 67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f not found: ID does not exist" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372223 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} err="failed to get container status \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": rpc error: code = NotFound desc = could not find container \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": container with ID starting with 67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372238 4619 scope.go:117] "RemoveContainer" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.372472 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": container with ID starting with cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958 not found: ID does not exist" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372495 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} err="failed to get container status \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": rpc error: code = NotFound desc = could not find container \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": container with ID starting with cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372508 4619 scope.go:117] "RemoveContainer" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.372797 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": container with ID starting with a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8 not found: ID does not exist" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372839 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} 
err="failed to get container status \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": rpc error: code = NotFound desc = could not find container \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": container with ID starting with a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.372860 4619 scope.go:117] "RemoveContainer" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.373178 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": container with ID starting with 8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e not found: ID does not exist" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373221 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} err="failed to get container status \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": rpc error: code = NotFound desc = could not find container \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": container with ID starting with 8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373242 4619 scope.go:117] "RemoveContainer" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: E0126 11:05:39.373542 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": container with ID starting with 04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139 not found: ID does not exist" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373565 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} err="failed to get container status \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": rpc error: code = NotFound desc = could not find container \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": container with ID starting with 04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373579 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373813 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} err="failed to get container status \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with 
f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.373830 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374039 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} err="failed to get container status \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": rpc error: code = NotFound desc = could not find container \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": container with ID starting with 7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374058 4619 scope.go:117] "RemoveContainer" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374275 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} err="failed to get container status \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": rpc error: code = NotFound desc = could not find container \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": container with ID starting with 7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374293 4619 scope.go:117] "RemoveContainer" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374471 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} err="failed to get container status \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": rpc error: code = NotFound desc = could not find container \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": container with ID starting with a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374488 4619 scope.go:117] "RemoveContainer" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374711 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} err="failed to get container status \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": rpc error: code = NotFound desc = could not find container \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": container with ID starting with 732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374728 4619 scope.go:117] "RemoveContainer" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374929 4619 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} err="failed to get container status \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": rpc error: code = NotFound desc = could not find container \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": container with ID starting with 67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.374948 4619 scope.go:117] "RemoveContainer" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375171 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} err="failed to get container status \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": rpc error: code = NotFound desc = could not find container \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": container with ID starting with cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375189 4619 scope.go:117] "RemoveContainer" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375423 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} err="failed to get container status \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": rpc error: code = NotFound desc = could not find container \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": container with ID starting with a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375438 4619 scope.go:117] "RemoveContainer" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375609 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} err="failed to get container status \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": rpc error: code = NotFound desc = could not find container \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": container with ID starting with 8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375639 4619 scope.go:117] "RemoveContainer" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375859 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} err="failed to get container status \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": rpc error: code = NotFound desc = could not find container \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": container with ID starting with 04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139 not found: ID does not exist" Jan 
26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.375874 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376016 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} err="failed to get container status \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376029 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376200 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} err="failed to get container status \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": rpc error: code = NotFound desc = could not find container \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": container with ID starting with 7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376216 4619 scope.go:117] "RemoveContainer" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376389 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} err="failed to get container status \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": rpc error: code = NotFound desc = could not find container \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": container with ID starting with 7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376404 4619 scope.go:117] "RemoveContainer" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376685 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} err="failed to get container status \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": rpc error: code = NotFound desc = could not find container \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": container with ID starting with a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376699 4619 scope.go:117] "RemoveContainer" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376848 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} err="failed to get container status 
\"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": rpc error: code = NotFound desc = could not find container \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": container with ID starting with 732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376863 4619 scope.go:117] "RemoveContainer" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.376997 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} err="failed to get container status \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": rpc error: code = NotFound desc = could not find container \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": container with ID starting with 67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.377011 4619 scope.go:117] "RemoveContainer" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.377262 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} err="failed to get container status \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": rpc error: code = NotFound desc = could not find container \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": container with ID starting with cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.377278 4619 scope.go:117] "RemoveContainer" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.377944 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} err="failed to get container status \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": rpc error: code = NotFound desc = could not find container \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": container with ID starting with a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.377962 4619 scope.go:117] "RemoveContainer" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378131 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} err="failed to get container status \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": rpc error: code = NotFound desc = could not find container \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": container with ID starting with 8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378146 4619 scope.go:117] "RemoveContainer" 
containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378410 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} err="failed to get container status \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": rpc error: code = NotFound desc = could not find container \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": container with ID starting with 04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378426 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378687 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} err="failed to get container status \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378703 4619 scope.go:117] "RemoveContainer" containerID="7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378876 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea"} err="failed to get container status \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": rpc error: code = NotFound desc = could not find container \"7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea\": container with ID starting with 7c726d54500a2f437747c2d97eb732c4ad7bbc7e8f3206f97e2c330493275aea not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.378891 4619 scope.go:117] "RemoveContainer" containerID="7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379209 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee"} err="failed to get container status \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": rpc error: code = NotFound desc = could not find container \"7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee\": container with ID starting with 7d34b3bfaf1109987715b587b43c2cccf20cfc72bfc061e81895d061aa5e05ee not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379224 4619 scope.go:117] "RemoveContainer" containerID="a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379367 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8"} err="failed to get container status \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": rpc error: code = NotFound desc = could not find 
container \"a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8\": container with ID starting with a4359bc3282070484f06564526e399a3c9177b4c98fa641ccf608b1bfd0d1ef8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379383 4619 scope.go:117] "RemoveContainer" containerID="732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379582 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e"} err="failed to get container status \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": rpc error: code = NotFound desc = could not find container \"732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e\": container with ID starting with 732cd9d8b799fd3b78e3b9caff1c67191517dd837772f7758c35d68c6b07e69e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379596 4619 scope.go:117] "RemoveContainer" containerID="67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379758 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f"} err="failed to get container status \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": rpc error: code = NotFound desc = could not find container \"67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f\": container with ID starting with 67d9cc88a837c895df18beabd4cd250b3366d0e3133f9c738061aeb09314f86f not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379774 4619 scope.go:117] "RemoveContainer" containerID="cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379905 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958"} err="failed to get container status \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": rpc error: code = NotFound desc = could not find container \"cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958\": container with ID starting with cb6e2c0743a7efa9410208a5a71eb0d87117cfbccd3a1be905c25a536a37d958 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.379921 4619 scope.go:117] "RemoveContainer" containerID="a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380044 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8"} err="failed to get container status \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": rpc error: code = NotFound desc = could not find container \"a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8\": container with ID starting with a9d1a386a5d8683a53afb7c025dcdb4721c34899b39c82845d7079bf35e039e8 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380059 4619 scope.go:117] "RemoveContainer" containerID="8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380184 4619 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e"} err="failed to get container status \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": rpc error: code = NotFound desc = could not find container \"8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e\": container with ID starting with 8ab241ccf31ea9a063300e6085a7aae04ca47a4393dda04eea169ed917906d6e not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380205 4619 scope.go:117] "RemoveContainer" containerID="04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380351 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139"} err="failed to get container status \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": rpc error: code = NotFound desc = could not find container \"04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139\": container with ID starting with 04d136a4c612278fb0ef11350e80716b5a2997400baa4341b16df931e0521139 not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380367 4619 scope.go:117] "RemoveContainer" containerID="f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.380493 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c"} err="failed to get container status \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": rpc error: code = NotFound desc = could not find container \"f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c\": container with ID starting with f27fb7685eb76cfade28ba85e4d5c4689acea5104a62d652b1160793ecc64f9c not found: ID does not exist" Jan 26 11:05:39 crc kubenswrapper[4619]: I0126 11:05:39.411956 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:39 crc kubenswrapper[4619]: W0126 11:05:39.425427 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ad1032_2923_4d44_86a5_68d9c56fc1b9.slice/crio-be33de925dc3dc04ee7615cdb59888e025b641c4bb5918452d0371b7a9d87508 WatchSource:0}: Error finding container be33de925dc3dc04ee7615cdb59888e025b641c4bb5918452d0371b7a9d87508: Status 404 returned error can't find the container with id be33de925dc3dc04ee7615cdb59888e025b641c4bb5918452d0371b7a9d87508 Jan 26 11:05:40 crc kubenswrapper[4619]: I0126 11:05:40.209496 4619 generic.go:334] "Generic (PLEG): container finished" podID="71ad1032-2923-4d44-86a5-68d9c56fc1b9" containerID="f339c7fc2dd79ff53a2ac52a40393d4eaa6d14925f89206816a88da9502b83bd" exitCode=0 Jan 26 11:05:40 crc kubenswrapper[4619]: I0126 11:05:40.209540 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerDied","Data":"f339c7fc2dd79ff53a2ac52a40393d4eaa6d14925f89206816a88da9502b83bd"} Jan 26 11:05:40 crc kubenswrapper[4619]: I0126 11:05:40.209605 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"be33de925dc3dc04ee7615cdb59888e025b641c4bb5918452d0371b7a9d87508"} Jan 26 11:05:41 crc kubenswrapper[4619]: I0126 11:05:41.222731 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"09c3b15854d529920de02a393e83b57ddd0b38caf03ecba7f75ec492d3bfacca"} Jan 26 11:05:41 crc kubenswrapper[4619]: I0126 11:05:41.223090 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"82eafd7a015faa70795490fc0122d71e6d8ce2d5959ff33a80ad82da01bbd50d"} Jan 26 11:05:41 crc kubenswrapper[4619]: I0126 11:05:41.223102 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"2350bb2d347b9e66bcbf7e7444cbae2fdd048871a3eaedbeef667d5645dbb349"} Jan 26 11:05:41 crc kubenswrapper[4619]: I0126 11:05:41.223111 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"4d994a881b354baf648aee722c1ae440a29c018b2cc07a9297c8bf7e505966ed"} Jan 26 11:05:41 crc kubenswrapper[4619]: I0126 11:05:41.223121 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"6d6d934b6e64a478aeb0cd844b4abc0fb0994abaf4d5b2b724241488c35ee070"} Jan 26 11:05:42 crc kubenswrapper[4619]: I0126 11:05:42.237451 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"3f299041a54d61eeb177d01abd810f32200a389e585da9ed3d7eeb48a2206ac0"} Jan 26 11:05:44 crc kubenswrapper[4619]: I0126 11:05:44.254292 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"2ad8bd24bb645d90b8b31efc5fcacc01e7eb20acd52bb78a408b2b9e7de40f0b"} Jan 26 11:05:46 crc kubenswrapper[4619]: I0126 11:05:46.270285 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" event={"ID":"71ad1032-2923-4d44-86a5-68d9c56fc1b9","Type":"ContainerStarted","Data":"9a323cb555fe571b33866e5137872175dcea0dbaff91c06c0b0075b56f8ea6e7"} Jan 26 11:05:46 crc kubenswrapper[4619]: I0126 11:05:46.270637 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:46 crc kubenswrapper[4619]: I0126 11:05:46.300544 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:46 crc kubenswrapper[4619]: I0126 11:05:46.306357 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" podStartSLOduration=7.30634851 podStartE2EDuration="7.30634851s" podCreationTimestamp="2026-01-26 11:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:05:46.304542363 +0000 UTC m=+645.338583089" watchObservedRunningTime="2026-01-26 11:05:46.30634851 +0000 UTC m=+645.340389226" Jan 26 11:05:47 crc kubenswrapper[4619]: I0126 11:05:47.274035 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:47 crc kubenswrapper[4619]: I0126 11:05:47.275462 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:47 crc kubenswrapper[4619]: I0126 11:05:47.313925 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:05:51 crc kubenswrapper[4619]: I0126 11:05:51.265887 4619 scope.go:117] "RemoveContainer" containerID="9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db" Jan 26 11:05:51 crc kubenswrapper[4619]: E0126 11:05:51.268400 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-684hz_openshift-multus(8aab93f8-6555-4389-b15c-9af458caa339)\"" pod="openshift-multus/multus-684hz" podUID="8aab93f8-6555-4389-b15c-9af458caa339" Jan 26 11:06:01 crc kubenswrapper[4619]: I0126 11:06:01.517925 4619 scope.go:117] "RemoveContainer" containerID="bd2e081103d0219f3feb25e53258b23eb64cafeb27bf5b0c0c62ac1f92015406" Jan 26 11:06:02 crc kubenswrapper[4619]: I0126 11:06:02.261154 4619 scope.go:117] "RemoveContainer" containerID="9de21328a2d0a384fe90d901b86088a21223b1f1e6b5e9ddd903bef2fc5637db" Jan 26 11:06:02 crc kubenswrapper[4619]: I0126 11:06:02.370209 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/2.log" Jan 26 11:06:03 crc kubenswrapper[4619]: I0126 11:06:03.380150 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-684hz_8aab93f8-6555-4389-b15c-9af458caa339/kube-multus/2.log" Jan 26 11:06:03 crc kubenswrapper[4619]: I0126 11:06:03.380209 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-684hz" 
event={"ID":"8aab93f8-6555-4389-b15c-9af458caa339","Type":"ContainerStarted","Data":"e6e0c8e98619586edcf2987bcc8f54354bfc8c1888452ca4984bdd76c2f650ca"} Jan 26 11:06:09 crc kubenswrapper[4619]: I0126 11:06:09.449704 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8854" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.631190 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr"] Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.633374 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.635481 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.644581 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr"] Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.724487 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5js6s\" (UniqueName: \"kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.724538 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.724596 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.826104 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.826192 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5js6s\" (UniqueName: \"kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" 
Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.826218 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.826801 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.826833 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.850770 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5js6s\" (UniqueName: \"kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:20 crc kubenswrapper[4619]: I0126 11:06:20.973462 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:21 crc kubenswrapper[4619]: I0126 11:06:21.407680 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr"] Jan 26 11:06:21 crc kubenswrapper[4619]: I0126 11:06:21.501788 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" event={"ID":"803f8495-c340-44e0-9b75-18fc9a944fd7","Type":"ContainerStarted","Data":"76db41c7be86ad52fb8b5651f6af9e3fe42b685211a69275bec7fad17152b9af"} Jan 26 11:06:22 crc kubenswrapper[4619]: I0126 11:06:22.509801 4619 generic.go:334] "Generic (PLEG): container finished" podID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerID="6f8597d272548aa9b003bda6cf737a0dfe3f150a1a0e3e5d4d38107216f2d6a2" exitCode=0 Jan 26 11:06:22 crc kubenswrapper[4619]: I0126 11:06:22.510117 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" event={"ID":"803f8495-c340-44e0-9b75-18fc9a944fd7","Type":"ContainerDied","Data":"6f8597d272548aa9b003bda6cf737a0dfe3f150a1a0e3e5d4d38107216f2d6a2"} Jan 26 11:06:24 crc kubenswrapper[4619]: I0126 11:06:24.522110 4619 generic.go:334] "Generic (PLEG): container finished" podID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerID="71ec144efe69ca2619c5fc184f7988f392f8a5157b12a7edd8ce44abb93b4c1e" exitCode=0 Jan 26 11:06:24 crc kubenswrapper[4619]: I0126 11:06:24.522185 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" event={"ID":"803f8495-c340-44e0-9b75-18fc9a944fd7","Type":"ContainerDied","Data":"71ec144efe69ca2619c5fc184f7988f392f8a5157b12a7edd8ce44abb93b4c1e"} Jan 26 11:06:25 crc kubenswrapper[4619]: I0126 11:06:25.532997 4619 generic.go:334] "Generic (PLEG): container finished" podID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerID="cd8f86348c1c597e3957c10877cfe8328738ee241da7c500a22ed58fbef2ea5c" exitCode=0 Jan 26 11:06:25 crc kubenswrapper[4619]: I0126 11:06:25.533077 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" event={"ID":"803f8495-c340-44e0-9b75-18fc9a944fd7","Type":"ContainerDied","Data":"cd8f86348c1c597e3957c10877cfe8328738ee241da7c500a22ed58fbef2ea5c"} Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.848104 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.907430 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle\") pod \"803f8495-c340-44e0-9b75-18fc9a944fd7\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.907515 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5js6s\" (UniqueName: \"kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s\") pod \"803f8495-c340-44e0-9b75-18fc9a944fd7\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.907541 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util\") pod \"803f8495-c340-44e0-9b75-18fc9a944fd7\" (UID: \"803f8495-c340-44e0-9b75-18fc9a944fd7\") " Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.908089 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle" (OuterVolumeSpecName: "bundle") pod "803f8495-c340-44e0-9b75-18fc9a944fd7" (UID: "803f8495-c340-44e0-9b75-18fc9a944fd7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:06:26 crc kubenswrapper[4619]: I0126 11:06:26.915804 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s" (OuterVolumeSpecName: "kube-api-access-5js6s") pod "803f8495-c340-44e0-9b75-18fc9a944fd7" (UID: "803f8495-c340-44e0-9b75-18fc9a944fd7"). InnerVolumeSpecName "kube-api-access-5js6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.009031 4619 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.009068 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5js6s\" (UniqueName: \"kubernetes.io/projected/803f8495-c340-44e0-9b75-18fc9a944fd7-kube-api-access-5js6s\") on node \"crc\" DevicePath \"\"" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.188929 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util" (OuterVolumeSpecName: "util") pod "803f8495-c340-44e0-9b75-18fc9a944fd7" (UID: "803f8495-c340-44e0-9b75-18fc9a944fd7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.210842 4619 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/803f8495-c340-44e0-9b75-18fc9a944fd7-util\") on node \"crc\" DevicePath \"\"" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.550767 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" event={"ID":"803f8495-c340-44e0-9b75-18fc9a944fd7","Type":"ContainerDied","Data":"76db41c7be86ad52fb8b5651f6af9e3fe42b685211a69275bec7fad17152b9af"} Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.550838 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76db41c7be86ad52fb8b5651f6af9e3fe42b685211a69275bec7fad17152b9af" Jan 26 11:06:27 crc kubenswrapper[4619]: I0126 11:06:27.550970 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.175711 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnsxv"] Jan 26 11:06:32 crc kubenswrapper[4619]: E0126 11:06:32.176488 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="util" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.176505 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="util" Jan 26 11:06:32 crc kubenswrapper[4619]: E0126 11:06:32.176516 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="pull" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.176523 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="pull" Jan 26 11:06:32 crc kubenswrapper[4619]: E0126 11:06:32.176539 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="extract" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.176547 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="extract" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.176703 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="803f8495-c340-44e0-9b75-18fc9a944fd7" containerName="extract" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.177167 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.179689 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bwcl4" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.179931 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.181653 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.221949 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnsxv"] Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.279082 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqbl8\" (UniqueName: \"kubernetes.io/projected/a05d614f-24d2-4005-9110-1a002d0670ae-kube-api-access-jqbl8\") pod \"nmstate-operator-646758c888-cnsxv\" (UID: \"a05d614f-24d2-4005-9110-1a002d0670ae\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.380626 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqbl8\" (UniqueName: \"kubernetes.io/projected/a05d614f-24d2-4005-9110-1a002d0670ae-kube-api-access-jqbl8\") pod \"nmstate-operator-646758c888-cnsxv\" (UID: \"a05d614f-24d2-4005-9110-1a002d0670ae\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.400040 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqbl8\" (UniqueName: \"kubernetes.io/projected/a05d614f-24d2-4005-9110-1a002d0670ae-kube-api-access-jqbl8\") pod \"nmstate-operator-646758c888-cnsxv\" (UID: \"a05d614f-24d2-4005-9110-1a002d0670ae\") " pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.492160 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" Jan 26 11:06:32 crc kubenswrapper[4619]: I0126 11:06:32.746099 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cnsxv"] Jan 26 11:06:33 crc kubenswrapper[4619]: I0126 11:06:33.583272 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" event={"ID":"a05d614f-24d2-4005-9110-1a002d0670ae","Type":"ContainerStarted","Data":"8fdaf410958dacebd98a63461cdd9f5619b67c9359c43c49f5de404da482e0d8"} Jan 26 11:06:35 crc kubenswrapper[4619]: I0126 11:06:35.597011 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" event={"ID":"a05d614f-24d2-4005-9110-1a002d0670ae","Type":"ContainerStarted","Data":"9d126767ac1e6407d9199e215e9bcc67c8df6b0fe547a15dde4a8e88496c5501"} Jan 26 11:06:35 crc kubenswrapper[4619]: I0126 11:06:35.630048 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cnsxv" podStartSLOduration=1.6598232579999999 podStartE2EDuration="3.630023756s" podCreationTimestamp="2026-01-26 11:06:32 +0000 UTC" firstStartedPulling="2026-01-26 11:06:32.754881517 +0000 UTC m=+691.788922223" lastFinishedPulling="2026-01-26 11:06:34.725082005 +0000 UTC m=+693.759122721" observedRunningTime="2026-01-26 11:06:35.629897342 +0000 UTC m=+694.663938088" watchObservedRunningTime="2026-01-26 11:06:35.630023756 +0000 UTC m=+694.664064512" Jan 26 11:06:40 crc kubenswrapper[4619]: I0126 11:06:40.985752 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nh758"] Jan 26 11:06:40 crc kubenswrapper[4619]: I0126 11:06:40.986736 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" Jan 26 11:06:40 crc kubenswrapper[4619]: I0126 11:06:40.988667 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wgwnd" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.002380 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nh758"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.016286 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.017118 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.019282 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.053081 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9k8lb"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.053821 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.056531 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091341 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxcp\" (UniqueName: \"kubernetes.io/projected/b2d70641-12a7-4923-8fa0-f09a91915630-kube-api-access-gxxcp\") pod \"nmstate-metrics-54757c584b-nh758\" (UID: \"b2d70641-12a7-4923-8fa0-f09a91915630\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091397 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091430 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlj6\" (UniqueName: \"kubernetes.io/projected/b8830012-0a44-4256-b546-b00b81d136cf-kube-api-access-8qlj6\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091449 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-nmstate-lock\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091469 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncrb\" (UniqueName: \"kubernetes.io/projected/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-kube-api-access-xncrb\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-ovs-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.091503 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-dbus-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.144312 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.144950 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.147077 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.147178 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.147249 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2hwr4" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.159964 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192066 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ffa32f9-021a-405b-920e-5fb684f8d8e4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192129 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxcp\" (UniqueName: \"kubernetes.io/projected/b2d70641-12a7-4923-8fa0-f09a91915630-kube-api-access-gxxcp\") pod \"nmstate-metrics-54757c584b-nh758\" (UID: \"b2d70641-12a7-4923-8fa0-f09a91915630\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192152 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6nx\" (UniqueName: \"kubernetes.io/projected/9ffa32f9-021a-405b-920e-5fb684f8d8e4-kube-api-access-pb6nx\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192188 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192220 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlj6\" (UniqueName: \"kubernetes.io/projected/b8830012-0a44-4256-b546-b00b81d136cf-kube-api-access-8qlj6\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192239 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-nmstate-lock\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192260 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffa32f9-021a-405b-920e-5fb684f8d8e4-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192276 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncrb\" (UniqueName: \"kubernetes.io/projected/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-kube-api-access-xncrb\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192310 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-ovs-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192329 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-dbus-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192482 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-nmstate-lock\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192530 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-dbus-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.192573 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8830012-0a44-4256-b546-b00b81d136cf-ovs-socket\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.202228 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.210722 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncrb\" (UniqueName: \"kubernetes.io/projected/b2e020f6-4ac4-407d-9eb9-96f1072d01ab-kube-api-access-xncrb\") pod \"nmstate-webhook-8474b5b9d8-56j7m\" (UID: \"b2e020f6-4ac4-407d-9eb9-96f1072d01ab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.214447 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxcp\" (UniqueName: \"kubernetes.io/projected/b2d70641-12a7-4923-8fa0-f09a91915630-kube-api-access-gxxcp\") pod \"nmstate-metrics-54757c584b-nh758\" (UID: 
\"b2d70641-12a7-4923-8fa0-f09a91915630\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.218018 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlj6\" (UniqueName: \"kubernetes.io/projected/b8830012-0a44-4256-b546-b00b81d136cf-kube-api-access-8qlj6\") pod \"nmstate-handler-9k8lb\" (UID: \"b8830012-0a44-4256-b546-b00b81d136cf\") " pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.293387 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6nx\" (UniqueName: \"kubernetes.io/projected/9ffa32f9-021a-405b-920e-5fb684f8d8e4-kube-api-access-pb6nx\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.293662 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffa32f9-021a-405b-920e-5fb684f8d8e4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.293744 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ffa32f9-021a-405b-920e-5fb684f8d8e4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.294608 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ffa32f9-021a-405b-920e-5fb684f8d8e4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.297789 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffa32f9-021a-405b-920e-5fb684f8d8e4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.304577 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.329330 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6nx\" (UniqueName: \"kubernetes.io/projected/9ffa32f9-021a-405b-920e-5fb684f8d8e4-kube-api-access-pb6nx\") pod \"nmstate-console-plugin-7754f76f8b-f7qjj\" (UID: \"9ffa32f9-021a-405b-920e-5fb684f8d8e4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.331747 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.366883 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-846444fff7-zbnps"] Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.367653 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.368991 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9k8lb" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395076 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-oauth-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395305 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-service-ca\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395400 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-trusted-ca-bundle\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395493 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395654 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395790 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-oauth-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.395902 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97fj\" (UniqueName: \"kubernetes.io/projected/18c7c95e-cd88-423e-b432-fba17b8ab4fa-kube-api-access-s97fj\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps" Jan 26 11:06:41 crc kubenswrapper[4619]: 
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.401545 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846444fff7-zbnps"]
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.458773 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.498898 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499122 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-oauth-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499226 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97fj\" (UniqueName: \"kubernetes.io/projected/18c7c95e-cd88-423e-b432-fba17b8ab4fa-kube-api-access-s97fj\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499320 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-oauth-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499425 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-service-ca\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499525 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-trusted-ca-bundle\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.499608 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.500452 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.501534 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-oauth-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.502202 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-service-ca\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.503028 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18c7c95e-cd88-423e-b432-fba17b8ab4fa-trusted-ca-bundle\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.511273 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-oauth-config\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.514287 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18c7c95e-cd88-423e-b432-fba17b8ab4fa-console-serving-cert\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.530459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97fj\" (UniqueName: \"kubernetes.io/projected/18c7c95e-cd88-423e-b432-fba17b8ab4fa-kube-api-access-s97fj\") pod \"console-846444fff7-zbnps\" (UID: \"18c7c95e-cd88-423e-b432-fba17b8ab4fa\") " pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.627354 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9k8lb" event={"ID":"b8830012-0a44-4256-b546-b00b81d136cf","Type":"ContainerStarted","Data":"3555db17f252de9d38abf9eba7daf0f2f7fbef5d787157c3eae2ca42ad002d27"}
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.692336 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.747227 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj"]
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.881005 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846444fff7-zbnps"]
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.922656 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nh758"]
Jan 26 11:06:41 crc kubenswrapper[4619]: I0126 11:06:41.924435 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m"]
Jan 26 11:06:41 crc kubenswrapper[4619]: W0126 11:06:41.930546 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2e020f6_4ac4_407d_9eb9_96f1072d01ab.slice/crio-98fd2239758c601a2b9dcae3dc3ee534a39701e26f178bddbfdd80498d2f10b5 WatchSource:0}: Error finding container 98fd2239758c601a2b9dcae3dc3ee534a39701e26f178bddbfdd80498d2f10b5: Status 404 returned error can't find the container with id 98fd2239758c601a2b9dcae3dc3ee534a39701e26f178bddbfdd80498d2f10b5
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.635724 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" event={"ID":"b2e020f6-4ac4-407d-9eb9-96f1072d01ab","Type":"ContainerStarted","Data":"98fd2239758c601a2b9dcae3dc3ee534a39701e26f178bddbfdd80498d2f10b5"}
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.637687 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846444fff7-zbnps" event={"ID":"18c7c95e-cd88-423e-b432-fba17b8ab4fa","Type":"ContainerStarted","Data":"7af89a769009febe85fd75b301ca2f3f485152f98f5e1a6770e7b3478fc31594"}
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.637727 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846444fff7-zbnps" event={"ID":"18c7c95e-cd88-423e-b432-fba17b8ab4fa","Type":"ContainerStarted","Data":"fdaf53524351b9d2f8a63fd162ccf599a7b87b6da52d8c8578f0c213b225a906"}
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.639062 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" event={"ID":"b2d70641-12a7-4923-8fa0-f09a91915630","Type":"ContainerStarted","Data":"0545ce8076f2a3397d7e7c351fb53ee4d0aec37cd4517e5be695becf22fc0514"}
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.640339 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" event={"ID":"9ffa32f9-021a-405b-920e-5fb684f8d8e4","Type":"ContainerStarted","Data":"30f3fc7e6e2e366cd032be1fee30f244f3ffbe4a3394d41b7ace11ed97b396f2"}
Jan 26 11:06:42 crc kubenswrapper[4619]: I0126 11:06:42.656949 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-846444fff7-zbnps" podStartSLOduration=1.6569314309999998 podStartE2EDuration="1.656931431s" podCreationTimestamp="2026-01-26 11:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:06:42.656837938 +0000 UTC m=+701.690878674" watchObservedRunningTime="2026-01-26 11:06:42.656931431 +0000 UTC m=+701.690972147"
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.679754 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" event={"ID":"9ffa32f9-021a-405b-920e-5fb684f8d8e4","Type":"ContainerStarted","Data":"9b50f0370a97cb7ea9929ab015b1cd15279488970a66b956456bef7c11c9a263"}
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.685124 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" event={"ID":"b2e020f6-4ac4-407d-9eb9-96f1072d01ab","Type":"ContainerStarted","Data":"1af1630327eabfc3ecd4a8afd59a3d4e3d3b2553a77ffc92e6453e682301449a"}
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.685432 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m"
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.688248 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9k8lb" event={"ID":"b8830012-0a44-4256-b546-b00b81d136cf","Type":"ContainerStarted","Data":"ba5f4e19e2aa0a2e78760647382445de78f970e66455478579930faf4b9ca31e"}
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.688389 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9k8lb"
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.693379 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" event={"ID":"b2d70641-12a7-4923-8fa0-f09a91915630","Type":"ContainerStarted","Data":"2e8a5b2895bbfa1e0f2f2e0f5d7df08efda66ab2fbe6277008a9dc19ecac89c4"}
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.728537 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f7qjj" podStartSLOduration=1.964318294 podStartE2EDuration="6.72851169s" podCreationTimestamp="2026-01-26 11:06:41 +0000 UTC" firstStartedPulling="2026-01-26 11:06:41.765274988 +0000 UTC m=+700.799315704" lastFinishedPulling="2026-01-26 11:06:46.529468384 +0000 UTC m=+705.563509100" observedRunningTime="2026-01-26 11:06:47.70499164 +0000 UTC m=+706.739032376" watchObservedRunningTime="2026-01-26 11:06:47.72851169 +0000 UTC m=+706.762552446"
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.744648 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9k8lb" podStartSLOduration=1.636310594 podStartE2EDuration="6.744596638s" podCreationTimestamp="2026-01-26 11:06:41 +0000 UTC" firstStartedPulling="2026-01-26 11:06:41.446013576 +0000 UTC m=+700.480054292" lastFinishedPulling="2026-01-26 11:06:46.55429961 +0000 UTC m=+705.588340336" observedRunningTime="2026-01-26 11:06:47.744099144 +0000 UTC m=+706.778139870" watchObservedRunningTime="2026-01-26 11:06:47.744596638 +0000 UTC m=+706.778637364"
Jan 26 11:06:47 crc kubenswrapper[4619]: I0126 11:06:47.745064 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m" podStartSLOduration=3.150932919 podStartE2EDuration="7.745056992s" podCreationTimestamp="2026-01-26 11:06:40 +0000 UTC" firstStartedPulling="2026-01-26 11:06:41.934430875 +0000 UTC m=+700.968471591" lastFinishedPulling="2026-01-26 11:06:46.528554938 +0000 UTC m=+705.562595664" observedRunningTime="2026-01-26 11:06:47.729031155 +0000 UTC m=+706.763071881" watchObservedRunningTime="2026-01-26 11:06:47.745056992 +0000 UTC m=+706.779097718"
Jan 26 11:06:49 crc kubenswrapper[4619]: I0126 11:06:49.721999 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" event={"ID":"b2d70641-12a7-4923-8fa0-f09a91915630","Type":"ContainerStarted","Data":"eb8d728d9adf763ee534459deb9884dc34664433efe07a2cdc80a1b57450a5ac"}
Jan 26 11:06:49 crc kubenswrapper[4619]: I0126 11:06:49.742176 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nh758" podStartSLOduration=2.365146981 podStartE2EDuration="9.742144905s" podCreationTimestamp="2026-01-26 11:06:40 +0000 UTC" firstStartedPulling="2026-01-26 11:06:41.930229816 +0000 UTC m=+700.964270532" lastFinishedPulling="2026-01-26 11:06:49.30722773 +0000 UTC m=+708.341268456" observedRunningTime="2026-01-26 11:06:49.736815743 +0000 UTC m=+708.770856489" watchObservedRunningTime="2026-01-26 11:06:49.742144905 +0000 UTC m=+708.776185661"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.563413 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9k8lb"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.692536 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.693157 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.700437 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.740769 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-846444fff7-zbnps"
Jan 26 11:06:51 crc kubenswrapper[4619]: I0126 11:06:51.811471 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"]
Jan 26 11:07:01 crc kubenswrapper[4619]: I0126 11:07:01.338926 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-56j7m"
Jan 26 11:07:14 crc kubenswrapper[4619]: I0126 11:07:14.234666 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:07:14 crc kubenswrapper[4619]: I0126 11:07:14.235139 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.559145 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"]
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.560416 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.566882 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.576344 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"]
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.616061 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4c6\" (UniqueName: \"kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.616143 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.616170 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.717850 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.717954 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4c6\" (UniqueName: \"kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.718013 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.718558 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.718777 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.747303 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4c6\" (UniqueName: \"kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:15 crc kubenswrapper[4619]: I0126 11:07:15.875092 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:16 crc kubenswrapper[4619]: I0126 11:07:16.081889 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"]
Jan 26 11:07:16 crc kubenswrapper[4619]: I0126 11:07:16.853560 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zdjpz" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console" containerID="cri-o://3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55" gracePeriod=15
Jan 26 11:07:16 crc kubenswrapper[4619]: I0126 11:07:16.925204 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5" event={"ID":"f84b7e47-460a-490b-b407-ab46935b44ea","Type":"ContainerStarted","Data":"7aafd04e50d3f5dc03113323d72a6bfa5e9be95562684df52140945b679c1d49"}
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.719181 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zdjpz_3b59d0dd-baeb-4a81-989b-7ee68bfa06aa/console/0.log"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.719256 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zdjpz"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742775 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742832 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742850 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742874 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742895 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742918 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.742975 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqptw\" (UniqueName: \"kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw\") pod \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\" (UID: \"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa\") "
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.743722 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca" (OuterVolumeSpecName: "service-ca") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.743875 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config" (OuterVolumeSpecName: "console-config") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.743968 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.744018 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.749907 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.749980 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw" (OuterVolumeSpecName: "kube-api-access-wqptw") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "kube-api-access-wqptw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.750190 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" (UID: "3b59d0dd-baeb-4a81-989b-7ee68bfa06aa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843685 4619 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843725 4619 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843736 4619 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843745 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqptw\" (UniqueName: \"kubernetes.io/projected/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-kube-api-access-wqptw\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843753 4619 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843761 4619 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.843768 4619 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.931848 4619 generic.go:334] "Generic (PLEG): container finished" podID="f84b7e47-460a-490b-b407-ab46935b44ea" containerID="6a237afb05ac0855cd9543212dce37d71e1ea16431360d873876d24123dd7673" exitCode=0
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.931919 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5" event={"ID":"f84b7e47-460a-490b-b407-ab46935b44ea","Type":"ContainerDied","Data":"6a237afb05ac0855cd9543212dce37d71e1ea16431360d873876d24123dd7673"}
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935656 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zdjpz_3b59d0dd-baeb-4a81-989b-7ee68bfa06aa/console/0.log"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935692 4619 generic.go:334] "Generic (PLEG): container finished" podID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerID="3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55" exitCode=2
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935721 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zdjpz" event={"ID":"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa","Type":"ContainerDied","Data":"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"}
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935748 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zdjpz" event={"ID":"3b59d0dd-baeb-4a81-989b-7ee68bfa06aa","Type":"ContainerDied","Data":"e0ea170810ac0eb7bb08789570ca844617ebd641bbced771082d76ef03661f0b"}
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935766 4619 scope.go:117] "RemoveContainer" containerID="3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.935879 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zdjpz"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.970060 4619 scope.go:117] "RemoveContainer" containerID="3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.972636 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"]
Jan 26 11:07:17 crc kubenswrapper[4619]: E0126 11:07:17.973195 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55\": container with ID starting with 3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55 not found: ID does not exist" containerID="3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.973229 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55"} err="failed to get container status \"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55\": rpc error: code = NotFound desc = could not find container \"3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55\": container with ID starting with 3840790430e0acf2a0467d52fd8c2c4799b4d866077317a3d6c45782be1a6a55 not found: ID does not exist"
Jan 26 11:07:17 crc kubenswrapper[4619]: I0126 11:07:17.977981 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zdjpz"]
Jan 26 11:07:19 crc kubenswrapper[4619]: I0126 11:07:19.273418 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" path="/var/lib/kubelet/pods/3b59d0dd-baeb-4a81-989b-7ee68bfa06aa/volumes"
Jan 26 11:07:19 crc kubenswrapper[4619]: I0126 11:07:19.951424 4619 generic.go:334] "Generic (PLEG): container finished" podID="f84b7e47-460a-490b-b407-ab46935b44ea" containerID="2b5758dfebff9caaca327feb4e7e5caaa69c7bf246984daa5d0eee1f81c2fbeb" exitCode=0
Jan 26 11:07:19 crc kubenswrapper[4619]: I0126 11:07:19.951493 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5" event={"ID":"f84b7e47-460a-490b-b407-ab46935b44ea","Type":"ContainerDied","Data":"2b5758dfebff9caaca327feb4e7e5caaa69c7bf246984daa5d0eee1f81c2fbeb"}
Jan 26 11:07:20 crc kubenswrapper[4619]: I0126 11:07:20.963557 4619 generic.go:334] "Generic (PLEG): container finished" podID="f84b7e47-460a-490b-b407-ab46935b44ea" containerID="e27f71754995b9d2448bb13290462f63c865dfe912e01e3d70fd6c28ed20e9cc" exitCode=0
Jan 26 11:07:20 crc kubenswrapper[4619]: I0126 11:07:20.963770 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5" event={"ID":"f84b7e47-460a-490b-b407-ab46935b44ea","Type":"ContainerDied","Data":"e27f71754995b9d2448bb13290462f63c865dfe912e01e3d70fd6c28ed20e9cc"}
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.194664 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.304997 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util\") pod \"f84b7e47-460a-490b-b407-ab46935b44ea\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") "
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.305040 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle\") pod \"f84b7e47-460a-490b-b407-ab46935b44ea\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") "
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.305104 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt4c6\" (UniqueName: \"kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6\") pod \"f84b7e47-460a-490b-b407-ab46935b44ea\" (UID: \"f84b7e47-460a-490b-b407-ab46935b44ea\") "
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.306682 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle" (OuterVolumeSpecName: "bundle") pod "f84b7e47-460a-490b-b407-ab46935b44ea" (UID: "f84b7e47-460a-490b-b407-ab46935b44ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.311246 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6" (OuterVolumeSpecName: "kube-api-access-vt4c6") pod "f84b7e47-460a-490b-b407-ab46935b44ea" (UID: "f84b7e47-460a-490b-b407-ab46935b44ea"). InnerVolumeSpecName "kube-api-access-vt4c6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.331465 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util" (OuterVolumeSpecName: "util") pod "f84b7e47-460a-490b-b407-ab46935b44ea" (UID: "f84b7e47-460a-490b-b407-ab46935b44ea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.407096 4619 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.407123 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt4c6\" (UniqueName: \"kubernetes.io/projected/f84b7e47-460a-490b-b407-ab46935b44ea-kube-api-access-vt4c6\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.407362 4619 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f84b7e47-460a-490b-b407-ab46935b44ea-util\") on node \"crc\" DevicePath \"\""
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.981747 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5" event={"ID":"f84b7e47-460a-490b-b407-ab46935b44ea","Type":"ContainerDied","Data":"7aafd04e50d3f5dc03113323d72a6bfa5e9be95562684df52140945b679c1d49"}
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.982177 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aafd04e50d3f5dc03113323d72a6bfa5e9be95562684df52140945b679c1d49"
Jan 26 11:07:22 crc kubenswrapper[4619]: I0126 11:07:22.981871 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.978869 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"]
Jan 26 11:07:34 crc kubenswrapper[4619]: E0126 11:07:34.979702 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="util"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979716 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="util"
Jan 26 11:07:34 crc kubenswrapper[4619]: E0126 11:07:34.979734 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="pull"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979742 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="pull"
Jan 26 11:07:34 crc kubenswrapper[4619]: E0126 11:07:34.979752 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="extract"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979759 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="extract"
Jan 26 11:07:34 crc kubenswrapper[4619]: E0126 11:07:34.979771 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979778 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979894 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84b7e47-460a-490b-b407-ab46935b44ea" containerName="extract"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.979908 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b59d0dd-baeb-4a81-989b-7ee68bfa06aa" containerName="console"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.980356 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.986964 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.987028 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.995746 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zn2bx"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.996911 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 26 11:07:34 crc kubenswrapper[4619]: I0126 11:07:34.997082 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.009322 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"]
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.084522 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr26b\" (UniqueName: \"kubernetes.io/projected/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-kube-api-access-tr26b\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.084654 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-apiservice-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.084911 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-webhook-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.185918 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-webhook-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.186012 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr26b\" (UniqueName: \"kubernetes.io/projected/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-kube-api-access-tr26b\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.186071 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-apiservice-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.200503 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-webhook-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.204394 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr26b\" (UniqueName: \"kubernetes.io/projected/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-kube-api-access-tr26b\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.207487 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c81b5cf-0f17-4d7b-bfd8-ee67be620339-apiservice-cert\") pod \"metallb-operator-controller-manager-f5d78644f-q9qtr\" (UID: \"4c81b5cf-0f17-4d7b-bfd8-ee67be620339\") " pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.301318 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.312561 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"]
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.313229 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.320858 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.320957 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4blwc"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.321325 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.349595 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"]
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.402780 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-webhook-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.402837 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-apiservice-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.402867 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w7d\" (UniqueName: \"kubernetes.io/projected/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-kube-api-access-n7w7d\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.513721 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-webhook-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.514827 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-apiservice-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.514875 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w7d\" (UniqueName: \"kubernetes.io/projected/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-kube-api-access-n7w7d\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.541478 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-apiservice-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.565686 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-webhook-cert\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.617477 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w7d\" (UniqueName: \"kubernetes.io/projected/50eeef8d-b0ff-4b67-86ef-68febf4bcc0b-kube-api-access-n7w7d\") pod \"metallb-operator-webhook-server-7575bfd756-qfvd5\" (UID: \"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b\") " pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.722892 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:35 crc kubenswrapper[4619]: I0126 11:07:35.788034 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"]
Jan 26 11:07:35 crc kubenswrapper[4619]: W0126 11:07:35.813747 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c81b5cf_0f17_4d7b_bfd8_ee67be620339.slice/crio-65643cb03aa43edd41dd6860532c9d014172cc959bf952c2de9404873648f973 WatchSource:0}: Error finding container 65643cb03aa43edd41dd6860532c9d014172cc959bf952c2de9404873648f973: Status 404 returned error can't find the container with id 65643cb03aa43edd41dd6860532c9d014172cc959bf952c2de9404873648f973
Jan 26 11:07:36 crc kubenswrapper[4619]: I0126 11:07:36.052454 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr" event={"ID":"4c81b5cf-0f17-4d7b-bfd8-ee67be620339","Type":"ContainerStarted","Data":"65643cb03aa43edd41dd6860532c9d014172cc959bf952c2de9404873648f973"}
Jan 26 11:07:36 crc kubenswrapper[4619]: I0126 11:07:36.197351 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"]
Jan 26 11:07:36 crc kubenswrapper[4619]: W0126 11:07:36.208609 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50eeef8d_b0ff_4b67_86ef_68febf4bcc0b.slice/crio-cea6f3c46ba3bbc2704b3df67f357bb0e061ca4f82a8ebb1523ec170f891e957 WatchSource:0}: Error finding container cea6f3c46ba3bbc2704b3df67f357bb0e061ca4f82a8ebb1523ec170f891e957: Status 404 returned error can't find the container with id cea6f3c46ba3bbc2704b3df67f357bb0e061ca4f82a8ebb1523ec170f891e957
Jan 26 11:07:37 crc kubenswrapper[4619]: I0126 11:07:37.058313 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5" event={"ID":"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b","Type":"ContainerStarted","Data":"cea6f3c46ba3bbc2704b3df67f357bb0e061ca4f82a8ebb1523ec170f891e957"}
Jan 26 11:07:39 crc kubenswrapper[4619]: I0126 11:07:39.071416 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr" event={"ID":"4c81b5cf-0f17-4d7b-bfd8-ee67be620339","Type":"ContainerStarted","Data":"1a938303440720887c015dacea4b60d6bbb92d96235d5856bae5de1f9de6ef33"}
Jan 26 11:07:39 crc kubenswrapper[4619]: I0126 11:07:39.071826 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:07:39 crc kubenswrapper[4619]: I0126 11:07:39.112270 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr" podStartSLOduration=2.142331686 podStartE2EDuration="5.112249852s" podCreationTimestamp="2026-01-26 11:07:34 +0000 UTC" firstStartedPulling="2026-01-26 11:07:35.823803206 +0000 UTC m=+754.857843922" lastFinishedPulling="2026-01-26 11:07:38.793721372 +0000 UTC m=+757.827762088" observedRunningTime="2026-01-26 11:07:39.105879184 +0000 UTC m=+758.139919920" watchObservedRunningTime="2026-01-26 11:07:39.112249852 +0000 UTC m=+758.146290588"
Jan 26 11:07:41 crc kubenswrapper[4619]: I0126 11:07:41.083313 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5" event={"ID":"50eeef8d-b0ff-4b67-86ef-68febf4bcc0b","Type":"ContainerStarted","Data":"aea9836779558bee5231d565a3e3e5037616559e1bb60638dfaf39a28e46e62d"}
Jan 26 11:07:41 crc kubenswrapper[4619]: I0126 11:07:41.083881 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5"
Jan 26 11:07:41 crc kubenswrapper[4619]: I0126 11:07:41.107867 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7575bfd756-qfvd5" podStartSLOduration=1.526092566 podStartE2EDuration="6.107852564s" podCreationTimestamp="2026-01-26 11:07:35 +0000 UTC" firstStartedPulling="2026-01-26 11:07:36.212538751 +0000 UTC m=+755.246579467" lastFinishedPulling="2026-01-26 11:07:40.794298749 +0000 UTC m=+759.828339465" observedRunningTime="2026-01-26 11:07:41.102978532 +0000 UTC m=+760.137019248" watchObservedRunningTime="2026-01-26 11:07:41.107852564 +0000 UTC m=+760.141893280"
Jan 26 11:07:44 crc kubenswrapper[4619]: I0126 11:07:44.234347 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:07:44 crc kubenswrapper[4619]: I0126 11:07:44.234731 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:07:46 crc kubenswrapper[4619]: I0126 11:07:46.457141 4619 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 11:08:14 crc kubenswrapper[4619]: I0126 11:08:14.234604 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:08:14 crc kubenswrapper[4619]: I0126 11:08:14.235326 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:08:14 crc kubenswrapper[4619]: I0126 11:08:14.235380 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4"
Jan 26 11:08:14 crc kubenswrapper[4619]: I0126 11:08:14.236119 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 11:08:14 crc kubenswrapper[4619]: I0126 11:08:14.236179 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb" gracePeriod=600
Jan 26 11:08:15 crc kubenswrapper[4619]: I0126 11:08:15.290712 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb" exitCode=0
Jan 26 11:08:15 crc kubenswrapper[4619]: I0126 11:08:15.290807 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb"}
Jan 26 11:08:15 crc kubenswrapper[4619]: I0126 11:08:15.291361 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff"}
Jan 26 11:08:15 crc kubenswrapper[4619]: I0126 11:08:15.291387 4619 scope.go:117] "RemoveContainer" containerID="9df64a75c583c4fdad8b268bc330fc4096084cc7d46cd0ce53eaf7504d309d1e"
Jan 26 11:08:15 crc kubenswrapper[4619]: I0126 11:08:15.307681 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-f5d78644f-q9qtr"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.092092 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sfz7r"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.094357 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sfz7r"
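The machine-config-daemon entries above show a full liveness cycle: the prober's GET against http://127.0.0.1:8798/health is refused, the probe is reported unhealthy, and the kubelet kills the container with gracePeriod=600 and restarts it. A sketch of a probe spec that would drive this behavior, using the client-go core/v1 types; the host, path, port, and grace period are taken from the log, while the period and threshold are assumptions (the visible failures are 30 s apart):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	grace := int64(600) // matches gracePeriod=600 in the kill entry above

	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // probe URL from the log
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumed from the 30 s failure cadence
		FailureThreshold: 3,  // assumed; not recoverable from this log
	}
	spec := corev1.PodSpec{
		TerminationGracePeriodSeconds: &grace,
		Containers: []corev1.Container{{
			Name:          "machine-config-daemon",
			LivenessProbe: probe,
		}},
	}
	fmt.Printf("%+v\n", spec.Containers[0].LivenessProbe)
}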
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.097407 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c6lfk"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.103674 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.103749 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112162 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-sockets\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112289 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9681692a-bb51-4dce-aa10-c85852bff137-metrics-certs\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112340 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-metrics\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112360 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-reloader\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112482 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-conf\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112529 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtrl\" (UniqueName: \"kubernetes.io/projected/9681692a-bb51-4dce-aa10-c85852bff137-kube-api-access-4xtrl\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.112576 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9681692a-bb51-4dce-aa10-c85852bff137-frr-startup\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.117803 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.118768 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.121068 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.129000 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213324 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-metrics\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213362 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-reloader\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213385 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213410 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-conf\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213432 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtrl\" (UniqueName: \"kubernetes.io/projected/9681692a-bb51-4dce-aa10-c85852bff137-kube-api-access-4xtrl\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213458 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9681692a-bb51-4dce-aa10-c85852bff137-frr-startup\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213482 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-sockets\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213517 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9681692a-bb51-4dce-aa10-c85852bff137-metrics-certs\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.213535 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9295j\" (UniqueName: \"kubernetes.io/projected/c511ad3e-52ab-4e39-bbed-f795da1b29e8-kube-api-access-9295j\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
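Every volume above passes through the same reconciler sequence: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. The UniqueName printed in each entry is just the plugin name joined with the pod UID and the volume name. A small sketch reconstructing the frr-k8s-sfz7r metrics-certs UniqueName from those parts; the volume and pod names come from the log, and pairing the volume with the frr-k8s-certs-secret named in the reflector entry is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Names from the log; the secret-to-volume pairing is assumed.
	podUID := "9681692a-bb51-4dce-aa10-c85852bff137"
	vol := corev1.Volume{
		Name: "metrics-certs",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "frr-k8s-certs-secret"},
		},
	}
	// Matches the UniqueName printed in the mount entries above:
	fmt.Printf("kubernetes.io/secret/%s-%s\n", podUID, vol.Name)
}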
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.214283 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-conf\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.214578 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-metrics\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.214745 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9681692a-bb51-4dce-aa10-c85852bff137-frr-startup\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.214822 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-reloader\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.215063 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9681692a-bb51-4dce-aa10-c85852bff137-frr-sockets\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.223396 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9681692a-bb51-4dce-aa10-c85852bff137-metrics-certs\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.232207 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fdh28"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.234977 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.235350 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtrl\" (UniqueName: \"kubernetes.io/projected/9681692a-bb51-4dce-aa10-c85852bff137-kube-api-access-4xtrl\") pod \"frr-k8s-sfz7r\" (UID: \"9681692a-bb51-4dce-aa10-c85852bff137\") " pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.251947 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.251980 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.252135 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.252357 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vhdrq"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.292222 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nz7bv"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.293195 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.297282 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315092 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metrics-certs\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315162 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9295j\" (UniqueName: \"kubernetes.io/projected/c511ad3e-52ab-4e39-bbed-f795da1b29e8-kube-api-access-9295j\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315199 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metallb-excludel2\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315218 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315256 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-cert\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315274 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315306 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvh6q\" (UniqueName: \"kubernetes.io/projected/72d807ee-cc40-44fc-b153-c36c4bb75332-kube-api-access-zvh6q\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315348 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8mj\" (UniqueName: \"kubernetes.io/projected/a3fb0354-e5ca-4c6c-a008-44355d8dd331-kube-api-access-mq8mj\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.315376 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.315793 4619 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.315867 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert podName:c511ad3e-52ab-4e39-bbed-f795da1b29e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:08:16.815849406 +0000 UTC m=+795.849890122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert") pod "frr-k8s-webhook-server-7df86c4f6c-jwd5k" (UID: "c511ad3e-52ab-4e39-bbed-f795da1b29e8") : secret "frr-k8s-webhook-server-cert" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.325968 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nz7bv"]
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.372753 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9295j\" (UniqueName: \"kubernetes.io/projected/c511ad3e-52ab-4e39-bbed-f795da1b29e8-kube-api-access-9295j\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.410901 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416418 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-cert\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416456 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416484 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvh6q\" (UniqueName: \"kubernetes.io/projected/72d807ee-cc40-44fc-b153-c36c4bb75332-kube-api-access-zvh6q\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416516 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8mj\" (UniqueName: \"kubernetes.io/projected/a3fb0354-e5ca-4c6c-a008-44355d8dd331-kube-api-access-mq8mj\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416534 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416557 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metrics-certs\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.416593 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metallb-excludel2\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.416817 4619 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.416894 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist podName:a3fb0354-e5ca-4c6c-a008-44355d8dd331 nodeName:}" failed. No retries permitted until 2026-01-26 11:08:16.916871039 +0000 UTC m=+795.950911755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist") pod "speaker-fdh28" (UID: "a3fb0354-e5ca-4c6c-a008-44355d8dd331") : secret "metallb-memberlist" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.416938 4619 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.416993 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs podName:72d807ee-cc40-44fc-b153-c36c4bb75332 nodeName:}" failed. No retries permitted until 2026-01-26 11:08:16.916975871 +0000 UTC m=+795.951016587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs") pod "controller-6968d8fdc4-nz7bv" (UID: "72d807ee-cc40-44fc-b153-c36c4bb75332") : secret "controller-certs-secret" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.417238 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metallb-excludel2\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.420822 4619 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.425106 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-metrics-certs\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.432339 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-cert\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.467019 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvh6q\" (UniqueName: \"kubernetes.io/projected/72d807ee-cc40-44fc-b153-c36c4bb75332-kube-api-access-zvh6q\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.470154 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8mj\" (UniqueName: \"kubernetes.io/projected/a3fb0354-e5ca-4c6c-a008-44355d8dd331-kube-api-access-mq8mj\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.821122 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.824635 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c511ad3e-52ab-4e39-bbed-f795da1b29e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jwd5k\" (UID: \"c511ad3e-52ab-4e39-bbed-f795da1b29e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.922981 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.923080 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.923858 4619 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: E0126 11:08:16.923958 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist podName:a3fb0354-e5ca-4c6c-a008-44355d8dd331 nodeName:}" failed. No retries permitted until 2026-01-26 11:08:17.923934419 +0000 UTC m=+796.957975135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist") pod "speaker-fdh28" (UID: "a3fb0354-e5ca-4c6c-a008-44355d8dd331") : secret "metallb-memberlist" not found
Jan 26 11:08:16 crc kubenswrapper[4619]: I0126 11:08:16.940662 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d807ee-cc40-44fc-b153-c36c4bb75332-metrics-certs\") pod \"controller-6968d8fdc4-nz7bv\" (UID: \"72d807ee-cc40-44fc-b153-c36c4bb75332\") " pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.030341 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.206762 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nz7bv"
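The nestedpendingoperations errors above also show the per-operation retry backoff: the first failed SetUp of the memberlist volume is retried after 500ms, the second after 1s, i.e. durationBeforeRetry doubles on each failure until the secret appears. A generic Go sketch of that doubling delay; the initial value matches the log, while the cap is an assumption, not taken from these entries:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond  // first durationBeforeRetry in the log
	const maxDelay = 2 * time.Minute // assumed upper bound, not from this log
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}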
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.311748 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"]
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.318491 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"0875926fd05663354ea3d57f512de0c41963c05671179805e2fa38fecdaa9fd4"}
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.647247 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nz7bv"]
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.948944 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:17 crc kubenswrapper[4619]: I0126 11:08:17.958668 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a3fb0354-e5ca-4c6c-a008-44355d8dd331-memberlist\") pod \"speaker-fdh28\" (UID: \"a3fb0354-e5ca-4c6c-a008-44355d8dd331\") " pod="metallb-system/speaker-fdh28"
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.080996 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fdh28"
Jan 26 11:08:18 crc kubenswrapper[4619]: W0126 11:08:18.103952 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3fb0354_e5ca_4c6c_a008_44355d8dd331.slice/crio-9d90527be10da1e592d0b91c3eb79c701c9c80a4f007ed7201a0396286c39f2b WatchSource:0}: Error finding container 9d90527be10da1e592d0b91c3eb79c701c9c80a4f007ed7201a0396286c39f2b: Status 404 returned error can't find the container with id 9d90527be10da1e592d0b91c3eb79c701c9c80a4f007ed7201a0396286c39f2b
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.324687 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fdh28" event={"ID":"a3fb0354-e5ca-4c6c-a008-44355d8dd331","Type":"ContainerStarted","Data":"9d90527be10da1e592d0b91c3eb79c701c9c80a4f007ed7201a0396286c39f2b"}
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.333373 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k" event={"ID":"c511ad3e-52ab-4e39-bbed-f795da1b29e8","Type":"ContainerStarted","Data":"e2af2170af313b12d8f530cbe63b74c274650058724c2472327e4ff22e27580c"}
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.349732 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nz7bv" event={"ID":"72d807ee-cc40-44fc-b153-c36c4bb75332","Type":"ContainerStarted","Data":"6e04ed37912f7d152217f543df23b61b1ee05f90866f67540f9296cee6c6bf8b"}
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.349772 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nz7bv" event={"ID":"72d807ee-cc40-44fc-b153-c36c4bb75332","Type":"ContainerStarted","Data":"1733bbc19611845f449b5c4027045ebaf714c4f39c3396436bd2b41b93053f5e"}
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.349789 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nz7bv" event={"ID":"72d807ee-cc40-44fc-b153-c36c4bb75332","Type":"ContainerStarted","Data":"b05935b8476e432d3ebf3819ea044aa13189f05b2f1cc2b7bba733a6cedcf398"}
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.350680 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:18 crc kubenswrapper[4619]: I0126 11:08:18.370131 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nz7bv" podStartSLOduration=2.370114828 podStartE2EDuration="2.370114828s" podCreationTimestamp="2026-01-26 11:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:08:18.367420203 +0000 UTC m=+797.401460919" watchObservedRunningTime="2026-01-26 11:08:18.370114828 +0000 UTC m=+797.404155544"
Jan 26 11:08:19 crc kubenswrapper[4619]: I0126 11:08:19.399041 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fdh28" event={"ID":"a3fb0354-e5ca-4c6c-a008-44355d8dd331","Type":"ContainerStarted","Data":"55c095388e079bb4764713bfc41c3b3a585d88c5ff13a5b299d413a86305b63c"}
Jan 26 11:08:19 crc kubenswrapper[4619]: I0126 11:08:19.399322 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fdh28" event={"ID":"a3fb0354-e5ca-4c6c-a008-44355d8dd331","Type":"ContainerStarted","Data":"4f8053c6d9ae7646186732bdd32b57d0c437c834280ab9bb9a4c6618c2fe827f"}
Jan 26 11:08:19 crc kubenswrapper[4619]: I0126 11:08:19.421070 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fdh28" podStartSLOduration=3.421052497 podStartE2EDuration="3.421052497s" podCreationTimestamp="2026-01-26 11:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:08:19.418175666 +0000 UTC m=+798.452216382" watchObservedRunningTime="2026-01-26 11:08:19.421052497 +0000 UTC m=+798.455093213"
Jan 26 11:08:20 crc kubenswrapper[4619]: I0126 11:08:20.406072 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fdh28"
Jan 26 11:08:25 crc kubenswrapper[4619]: I0126 11:08:25.452216 4619 generic.go:334] "Generic (PLEG): container finished" podID="9681692a-bb51-4dce-aa10-c85852bff137" containerID="ccea6dc6cd36ca94d108f79819e5f687148bdae864daade3084246f1d3c375a4" exitCode=0
Jan 26 11:08:25 crc kubenswrapper[4619]: I0126 11:08:25.452293 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerDied","Data":"ccea6dc6cd36ca94d108f79819e5f687148bdae864daade3084246f1d3c375a4"}
Jan 26 11:08:25 crc kubenswrapper[4619]: I0126 11:08:25.455543 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k" event={"ID":"c511ad3e-52ab-4e39-bbed-f795da1b29e8","Type":"ContainerStarted","Data":"b8f692f2dcda60b17f33b8d3a7ee72c94c55cdbbd1332dca731566fef7cd9596"}
Jan 26 11:08:25 crc kubenswrapper[4619]: I0126 11:08:25.455785 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k"
Jan 26 11:08:25 crc kubenswrapper[4619]: I0126 11:08:25.509819 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k" podStartSLOduration=1.863925755 podStartE2EDuration="9.509800235s" podCreationTimestamp="2026-01-26 11:08:16 +0000 UTC" firstStartedPulling="2026-01-26 11:08:17.349010103 +0000 UTC m=+796.383050819" lastFinishedPulling="2026-01-26 11:08:24.994884583 +0000 UTC m=+804.028925299" observedRunningTime="2026-01-26 11:08:25.505357139 +0000 UTC m=+804.539397855" watchObservedRunningTime="2026-01-26 11:08:25.509800235 +0000 UTC m=+804.543840961"
Jan 26 11:08:26 crc kubenswrapper[4619]: I0126 11:08:26.464488 4619 generic.go:334] "Generic (PLEG): container finished" podID="9681692a-bb51-4dce-aa10-c85852bff137" containerID="8b1e0ade3cbbe805340bd58a3be07ecd21543170d52626e9d8eeaeb587ca1fd2" exitCode=0
Jan 26 11:08:26 crc kubenswrapper[4619]: I0126 11:08:26.464584 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerDied","Data":"8b1e0ade3cbbe805340bd58a3be07ecd21543170d52626e9d8eeaeb587ca1fd2"}
Jan 26 11:08:27 crc kubenswrapper[4619]: I0126 11:08:27.212439 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nz7bv"
Jan 26 11:08:27 crc kubenswrapper[4619]: I0126 11:08:27.475171 4619 generic.go:334] "Generic (PLEG): container finished" podID="9681692a-bb51-4dce-aa10-c85852bff137" containerID="9c3882ff29c62ed8f93ed9550cdd1299170351fb0b9d1153a78b88fc5323e4f4" exitCode=0
Jan 26 11:08:27 crc kubenswrapper[4619]: I0126 11:08:27.475239 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerDied","Data":"9c3882ff29c62ed8f93ed9550cdd1299170351fb0b9d1153a78b88fc5323e4f4"}
Jan 26 11:08:28 crc kubenswrapper[4619]: I0126 11:08:28.084735 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fdh28"
Jan 26 11:08:28 crc kubenswrapper[4619]: I0126 11:08:28.488927 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"9cb2728ee6403df079f7f315e91637b5c722640d593abbb2a071b61de5218f96"}
Jan 26 11:08:28 crc kubenswrapper[4619]: I0126 11:08:28.489243 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"54dfc61b9a7dc599cd9a704aaee1b57646548d8389187827da0bfc6cedcef01d"}
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.505546 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"f3bb1576292339ac5fa72b6d9bfde5f7f8c76b066b7bb34e7764a1bc148e5184"}
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.506181 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"ea3de3cbb7b39061a080056d5bba49f074202165c8c18e142e0d8fe088525edd"}
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.506251 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.506271 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"225fc9de18982b1a23a11357a1070c32ea9a08743febcf8ef307b7c2b7b97c32"}
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.506287 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfz7r" event={"ID":"9681692a-bb51-4dce-aa10-c85852bff137","Type":"ContainerStarted","Data":"f534d2cc7f4a76b0fab17323eca4ed185e30b1b13ee8f879fc5d0da10b016dfe"}
Jan 26 11:08:29 crc kubenswrapper[4619]: I0126 11:08:29.535155 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sfz7r" podStartSLOduration=5.171210263 podStartE2EDuration="13.535140531s" podCreationTimestamp="2026-01-26 11:08:16 +0000 UTC" firstStartedPulling="2026-01-26 11:08:16.603135809 +0000 UTC m=+795.637176525" lastFinishedPulling="2026-01-26 11:08:24.967066077 +0000 UTC m=+804.001106793" observedRunningTime="2026-01-26 11:08:29.531539619 +0000 UTC m=+808.565580345" watchObservedRunningTime="2026-01-26 11:08:29.535140531 +0000 UTC m=+808.569181237"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.043547 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"]
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.044661 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.046895 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.047120 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cnsln"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.047540 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.053588 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"]
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.095802 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p44g\" (UniqueName: \"kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g\") pod \"openstack-operator-index-glsrq\" (UID: \"51c592af-43b7-4753-a563-5312e0e921eb\") " pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.196658 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p44g\" (UniqueName: \"kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g\") pod \"openstack-operator-index-glsrq\" (UID: \"51c592af-43b7-4753-a563-5312e0e921eb\") " pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.231374 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p44g\" (UniqueName: \"kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g\") pod \"openstack-operator-index-glsrq\" (UID: \"51c592af-43b7-4753-a563-5312e0e921eb\") " pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.366503 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-glsrq"
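The frr-k8s-sfz7r sequence above, three ContainerDied events with exitCode=0 between 11:08:25 and 11:08:27 followed by a burst of ContainerStarted events, is the pattern sequential init containers produce: each must exit 0 before the next runs, and the main containers only start after the last one finishes. A hypothetical spec shape with that behavior; the container names and images below are placeholders, not taken from this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// Run one after another; each must exit 0 before the next starts.
		InitContainers: []corev1.Container{
			{Name: "copy-frr-files", Image: "example.invalid/frr"}, // placeholder
			{Name: "copy-reloader", Image: "example.invalid/frr"},  // placeholder
			{Name: "copy-metrics", Image: "example.invalid/frr"},   // placeholder
		},
		// Started together only after every init container has finished.
		Containers: []corev1.Container{
			{Name: "frr", Image: "example.invalid/frr"},        // placeholder
			{Name: "controller", Image: "example.invalid/frr"}, // placeholder
		},
	}
	for _, c := range spec.InitContainers {
		fmt.Println("init container, runs to completion first:", c.Name)
	}
}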
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.416439 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.454973 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sfz7r"
Jan 26 11:08:31 crc kubenswrapper[4619]: I0126 11:08:31.856134 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"]
Jan 26 11:08:31 crc kubenswrapper[4619]: W0126 11:08:31.859091 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c592af_43b7_4753_a563_5312e0e921eb.slice/crio-726f984c3542eec55f52260bdcc6bd20a4164fe0bb4ca09a0a5aed2f222d18aa WatchSource:0}: Error finding container 726f984c3542eec55f52260bdcc6bd20a4164fe0bb4ca09a0a5aed2f222d18aa: Status 404 returned error can't find the container with id 726f984c3542eec55f52260bdcc6bd20a4164fe0bb4ca09a0a5aed2f222d18aa
Jan 26 11:08:32 crc kubenswrapper[4619]: I0126 11:08:32.535076 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-glsrq" event={"ID":"51c592af-43b7-4753-a563-5312e0e921eb","Type":"ContainerStarted","Data":"726f984c3542eec55f52260bdcc6bd20a4164fe0bb4ca09a0a5aed2f222d18aa"}
Jan 26 11:08:34 crc kubenswrapper[4619]: I0126 11:08:34.410352 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"]
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.014707 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xm4c9"]
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.015869 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xm4c9"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.026697 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xm4c9"]
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.149270 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpfz\" (UniqueName: \"kubernetes.io/projected/c89269ce-7325-4368-8653-48d35a50ee0b-kube-api-access-mqpfz\") pod \"openstack-operator-index-xm4c9\" (UID: \"c89269ce-7325-4368-8653-48d35a50ee0b\") " pod="openstack-operators/openstack-operator-index-xm4c9"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.250690 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpfz\" (UniqueName: \"kubernetes.io/projected/c89269ce-7325-4368-8653-48d35a50ee0b-kube-api-access-mqpfz\") pod \"openstack-operator-index-xm4c9\" (UID: \"c89269ce-7325-4368-8653-48d35a50ee0b\") " pod="openstack-operators/openstack-operator-index-xm4c9"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.269394 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpfz\" (UniqueName: \"kubernetes.io/projected/c89269ce-7325-4368-8653-48d35a50ee0b-kube-api-access-mqpfz\") pod \"openstack-operator-index-xm4c9\" (UID: \"c89269ce-7325-4368-8653-48d35a50ee0b\") " pod="openstack-operators/openstack-operator-index-xm4c9"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.337311 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xm4c9"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.553384 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-glsrq" event={"ID":"51c592af-43b7-4753-a563-5312e0e921eb","Type":"ContainerStarted","Data":"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"}
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.553507 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-glsrq" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server" containerID="cri-o://63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1" gracePeriod=2
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.574791 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-glsrq" podStartSLOduration=1.701756415 podStartE2EDuration="4.5747705s" podCreationTimestamp="2026-01-26 11:08:31 +0000 UTC" firstStartedPulling="2026-01-26 11:08:31.864489191 +0000 UTC m=+810.898529907" lastFinishedPulling="2026-01-26 11:08:34.737503286 +0000 UTC m=+813.771543992" observedRunningTime="2026-01-26 11:08:35.572393544 +0000 UTC m=+814.606434290" watchObservedRunningTime="2026-01-26 11:08:35.5747705 +0000 UTC m=+814.608811216"
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.732996 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xm4c9"]
Jan 26 11:08:35 crc kubenswrapper[4619]: W0126 11:08:35.742969 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc89269ce_7325_4368_8653_48d35a50ee0b.slice/crio-514076fc57a2af6ad24ec000401d78b83d235c27665485f5244624903d32df89 WatchSource:0}: Error finding container 514076fc57a2af6ad24ec000401d78b83d235c27665485f5244624903d32df89: Status 404 returned error can't find the container with id 514076fc57a2af6ad24ec000401d78b83d235c27665485f5244624903d32df89
Jan 26 11:08:35 crc kubenswrapper[4619]: I0126 11:08:35.995537 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.161537 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p44g\" (UniqueName: \"kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g\") pod \"51c592af-43b7-4753-a563-5312e0e921eb\" (UID: \"51c592af-43b7-4753-a563-5312e0e921eb\") "
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.172801 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g" (OuterVolumeSpecName: "kube-api-access-9p44g") pod "51c592af-43b7-4753-a563-5312e0e921eb" (UID: "51c592af-43b7-4753-a563-5312e0e921eb"). InnerVolumeSpecName "kube-api-access-9p44g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.262719 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p44g\" (UniqueName: \"kubernetes.io/projected/51c592af-43b7-4753-a563-5312e0e921eb-kube-api-access-9p44g\") on node \"crc\" DevicePath \"\""
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.560050 4619 generic.go:334] "Generic (PLEG): container finished" podID="51c592af-43b7-4753-a563-5312e0e921eb" containerID="63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1" exitCode=0
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.560139 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-glsrq"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.563743 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-glsrq" event={"ID":"51c592af-43b7-4753-a563-5312e0e921eb","Type":"ContainerDied","Data":"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"}
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.563807 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-glsrq" event={"ID":"51c592af-43b7-4753-a563-5312e0e921eb","Type":"ContainerDied","Data":"726f984c3542eec55f52260bdcc6bd20a4164fe0bb4ca09a0a5aed2f222d18aa"}
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.563826 4619 scope.go:117] "RemoveContainer" containerID="63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.566823 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xm4c9" event={"ID":"c89269ce-7325-4368-8653-48d35a50ee0b","Type":"ContainerStarted","Data":"4d9fb69d010624f76921c5ee302a9c7c3854c3910ff2ee8738385e51826b6595"}
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.566856 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xm4c9" event={"ID":"c89269ce-7325-4368-8653-48d35a50ee0b","Type":"ContainerStarted","Data":"514076fc57a2af6ad24ec000401d78b83d235c27665485f5244624903d32df89"}
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.583690 4619 scope.go:117] "RemoveContainer" containerID="63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.586970 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xm4c9" podStartSLOduration=1.498353542 podStartE2EDuration="1.586952184s" podCreationTimestamp="2026-01-26 11:08:35 +0000 UTC" firstStartedPulling="2026-01-26 11:08:35.749146375 +0000 UTC m=+814.783187091" lastFinishedPulling="2026-01-26 11:08:35.837745017 +0000 UTC m=+814.871785733" observedRunningTime="2026-01-26 11:08:36.58395875 +0000 UTC m=+815.617999456" watchObservedRunningTime="2026-01-26 11:08:36.586952184 +0000 UTC m=+815.620992900"
Jan 26 11:08:36 crc kubenswrapper[4619]: E0126 11:08:36.588713 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1\": container with ID starting with 63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1 not found: ID does not exist" containerID="63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.588742 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1"} err="failed to get container status \"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1\": rpc error: code = NotFound desc = could not find container \"63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1\": container with ID starting with 63acd2c36b7e5a4e623210e96b59c98239174c0fa969f01081e6807f04537cf1 not found: ID does not exist"
Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.600679 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"]
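The tail of the glsrq teardown above is a deliberately idempotent delete: after RemoveContainer, a follow-up ContainerStatus call to the runtime fails with rpc code = NotFound, which is logged and swallowed rather than retried, since the container being gone is the desired end state. A Go sketch of that pattern using the gRPC status helpers; removeIfPresent is a stand-in name for illustration, not the kubelet's own function:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats "container not found" as success, the behavior
// visible in the DeleteContainer/ContainerStatus entries above.
func removeIfPresent(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			return nil // already gone: deletion is idempotent
		}
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIfPresent("63acd2c36b7e", gone)) // <nil>
}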
pods=["openstack-operators/openstack-operator-index-glsrq"] Jan 26 11:08:36 crc kubenswrapper[4619]: I0126 11:08:36.604871 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-glsrq"] Jan 26 11:08:37 crc kubenswrapper[4619]: I0126 11:08:37.040908 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jwd5k" Jan 26 11:08:37 crc kubenswrapper[4619]: I0126 11:08:37.268726 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c592af-43b7-4753-a563-5312e0e921eb" path="/var/lib/kubelet/pods/51c592af-43b7-4753-a563-5312e0e921eb/volumes" Jan 26 11:08:45 crc kubenswrapper[4619]: I0126 11:08:45.338424 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xm4c9" Jan 26 11:08:45 crc kubenswrapper[4619]: I0126 11:08:45.338967 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xm4c9" Jan 26 11:08:45 crc kubenswrapper[4619]: I0126 11:08:45.362198 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xm4c9" Jan 26 11:08:45 crc kubenswrapper[4619]: I0126 11:08:45.649214 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xm4c9" Jan 26 11:08:46 crc kubenswrapper[4619]: I0126 11:08:46.414339 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sfz7r" Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.213582 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"] Jan 26 11:08:51 crc kubenswrapper[4619]: E0126 11:08:51.214167 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server" Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.214181 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server" Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.214335 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server" Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.215290 4619 util.go:30] "No sandbox for pod can be found. 
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.213582 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"]
Jan 26 11:08:51 crc kubenswrapper[4619]: E0126 11:08:51.214167 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.214181 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.214335 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c592af-43b7-4753-a563-5312e0e921eb" containerName="registry-server"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.215290 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.217825 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-q7j5z"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.233988 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"]
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.390090 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.390173 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx99r\" (UniqueName: \"kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.390271 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.491652 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.491714 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx99r\" (UniqueName: \"kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.491761 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.492119 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.492337 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.521425 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx99r\" (UniqueName: \"kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r\") pod \"5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") " pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.537832 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:51 crc kubenswrapper[4619]: I0126 11:08:51.833312 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"]
Jan 26 11:08:52 crc kubenswrapper[4619]: I0126 11:08:52.671092 4619 generic.go:334] "Generic (PLEG): container finished" podID="70360edb-1325-42c3-9ffd-05d030d21375" containerID="c98a75ee679e25fdf2b4e1e06165f8291402909df39442d9d9d641931d83eeb5" exitCode=0
Jan 26 11:08:52 crc kubenswrapper[4619]: I0126 11:08:52.671144 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck" event={"ID":"70360edb-1325-42c3-9ffd-05d030d21375","Type":"ContainerDied","Data":"c98a75ee679e25fdf2b4e1e06165f8291402909df39442d9d9d641931d83eeb5"}
Jan 26 11:08:52 crc kubenswrapper[4619]: I0126 11:08:52.671178 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck" event={"ID":"70360edb-1325-42c3-9ffd-05d030d21375","Type":"ContainerStarted","Data":"0657acc0698b9b4ab5de6aefcfdbe5b235a98c6a08e57533157bc5aae83a814e"}
Jan 26 11:08:53 crc kubenswrapper[4619]: I0126 11:08:53.680561 4619 generic.go:334] "Generic (PLEG): container finished" podID="70360edb-1325-42c3-9ffd-05d030d21375" containerID="0b242856d5f3da9e5ffd2a83207d10267a9e8cee90dfe5a40ed91fe5acd5fb77" exitCode=0
Jan 26 11:08:53 crc kubenswrapper[4619]: I0126 11:08:53.681113 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck" event={"ID":"70360edb-1325-42c3-9ffd-05d030d21375","Type":"ContainerDied","Data":"0b242856d5f3da9e5ffd2a83207d10267a9e8cee90dfe5a40ed91fe5acd5fb77"}
Jan 26 11:08:54 crc kubenswrapper[4619]: I0126 11:08:54.689307 4619 generic.go:334] "Generic (PLEG): container finished" podID="70360edb-1325-42c3-9ffd-05d030d21375" containerID="8e4f479d1b1c7dad8667019db9bb6e06e487a9ed0887e3cb7b54973ff15536a5" exitCode=0
Jan 26 11:08:54 crc kubenswrapper[4619]: I0126 11:08:54.689411 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck" event={"ID":"70360edb-1325-42c3-9ffd-05d030d21375","Type":"ContainerDied","Data":"8e4f479d1b1c7dad8667019db9bb6e06e487a9ed0887e3cb7b54973ff15536a5"}
Jan 26 11:08:55 crc kubenswrapper[4619]: I0126 11:08:55.948394 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.063274 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle\") pod \"70360edb-1325-42c3-9ffd-05d030d21375\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") "
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.063663 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util\") pod \"70360edb-1325-42c3-9ffd-05d030d21375\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") "
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.063729 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx99r\" (UniqueName: \"kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r\") pod \"70360edb-1325-42c3-9ffd-05d030d21375\" (UID: \"70360edb-1325-42c3-9ffd-05d030d21375\") "
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.064535 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle" (OuterVolumeSpecName: "bundle") pod "70360edb-1325-42c3-9ffd-05d030d21375" (UID: "70360edb-1325-42c3-9ffd-05d030d21375"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.068150 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r" (OuterVolumeSpecName: "kube-api-access-kx99r") pod "70360edb-1325-42c3-9ffd-05d030d21375" (UID: "70360edb-1325-42c3-9ffd-05d030d21375"). InnerVolumeSpecName "kube-api-access-kx99r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.076748 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util" (OuterVolumeSpecName: "util") pod "70360edb-1325-42c3-9ffd-05d030d21375" (UID: "70360edb-1325-42c3-9ffd-05d030d21375"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.165312 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx99r\" (UniqueName: \"kubernetes.io/projected/70360edb-1325-42c3-9ffd-05d030d21375-kube-api-access-kx99r\") on node \"crc\" DevicePath \"\""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.165344 4619 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.165356 4619 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70360edb-1325-42c3-9ffd-05d030d21375-util\") on node \"crc\" DevicePath \"\""
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.705842 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck" event={"ID":"70360edb-1325-42c3-9ffd-05d030d21375","Type":"ContainerDied","Data":"0657acc0698b9b4ab5de6aefcfdbe5b235a98c6a08e57533157bc5aae83a814e"}
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.705895 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0657acc0698b9b4ab5de6aefcfdbe5b235a98c6a08e57533157bc5aae83a814e"
Jan 26 11:08:56 crc kubenswrapper[4619]: I0126 11:08:56.706029 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.837447 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b888df747-blvm9"]
Jan 26 11:09:03 crc kubenswrapper[4619]: E0126 11:09:03.838275 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="util"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.838291 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="util"
Jan 26 11:09:03 crc kubenswrapper[4619]: E0126 11:09:03.838302 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="pull"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.838310 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="pull"
Jan 26 11:09:03 crc kubenswrapper[4619]: E0126 11:09:03.838325 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="extract"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.838333 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="extract"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.838462 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="70360edb-1325-42c3-9ffd-05d030d21375" containerName="extract"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.839144 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.846359 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cjmvj"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.867169 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7q7\" (UniqueName: \"kubernetes.io/projected/dd1e6c3c-64b1-4ced-9371-c2368efd4620-kube-api-access-td7q7\") pod \"openstack-operator-controller-init-b888df747-blvm9\" (UID: \"dd1e6c3c-64b1-4ced-9371-c2368efd4620\") " pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.876130 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b888df747-blvm9"]
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.968173 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7q7\" (UniqueName: \"kubernetes.io/projected/dd1e6c3c-64b1-4ced-9371-c2368efd4620-kube-api-access-td7q7\") pod \"openstack-operator-controller-init-b888df747-blvm9\" (UID: \"dd1e6c3c-64b1-4ced-9371-c2368efd4620\") " pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:03 crc kubenswrapper[4619]: I0126 11:09:03.995647 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7q7\" (UniqueName: \"kubernetes.io/projected/dd1e6c3c-64b1-4ced-9371-c2368efd4620-kube-api-access-td7q7\") pod \"openstack-operator-controller-init-b888df747-blvm9\" (UID: \"dd1e6c3c-64b1-4ced-9371-c2368efd4620\") " pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:04 crc kubenswrapper[4619]: I0126 11:09:04.156135 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:04 crc kubenswrapper[4619]: I0126 11:09:04.458863 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b888df747-blvm9"]
Jan 26 11:09:04 crc kubenswrapper[4619]: I0126 11:09:04.770816 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9" event={"ID":"dd1e6c3c-64b1-4ced-9371-c2368efd4620","Type":"ContainerStarted","Data":"ce1e43bde3b23b997f29282cad37877b05b06e2c089305a691575e769d52ca7f"}
Jan 26 11:09:09 crc kubenswrapper[4619]: I0126 11:09:09.815840 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9" event={"ID":"dd1e6c3c-64b1-4ced-9371-c2368efd4620","Type":"ContainerStarted","Data":"a291135aa2a18ed274b1fed444326cab32f6e065d4ba67534f44dd9d31f4568f"}
Jan 26 11:09:09 crc kubenswrapper[4619]: I0126 11:09:09.816575 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:09 crc kubenswrapper[4619]: I0126 11:09:09.860309 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9" podStartSLOduration=2.204486887 podStartE2EDuration="6.860289789s" podCreationTimestamp="2026-01-26 11:09:03 +0000 UTC" firstStartedPulling="2026-01-26 11:09:04.465277638 +0000 UTC m=+843.499318354" lastFinishedPulling="2026-01-26 11:09:09.12108054 +0000 UTC m=+848.155121256" observedRunningTime="2026-01-26 11:09:09.851149399 +0000 UTC m=+848.885190155" watchObservedRunningTime="2026-01-26 11:09:09.860289789 +0000 UTC m=+848.894330625"
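The "Observed pod startup duration" record above encodes a small calculation worth making explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time minus the window spent pulling images (lastFinishedPulling minus firstStartedPulling). The numbers in the record check out exactly; a self-contained Go verification using the values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Go accepts a fractional-seconds field when parsing even if the
        // layout omits it, so one layout covers all four timestamps.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-26 11:09:03 +0000 UTC")
        firstPull := mustParse("2026-01-26 11:09:04.465277638 +0000 UTC")
        lastPull := mustParse("2026-01-26 11:09:09.12108054 +0000 UTC")
        watched := mustParse("2026-01-26 11:09:09.860289789 +0000 UTC")

        e2e := watched.Sub(created)     // podStartE2EDuration: 6.860289789s
        pull := lastPull.Sub(firstPull) // time spent pulling the image: 4.655802902s
        slo := e2e - pull               // podStartSLOduration: 2.204486887s
        fmt.Println(e2e, pull, slo)
    }
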
Jan 26 11:09:14 crc kubenswrapper[4619]: I0126 11:09:14.159428 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b888df747-blvm9"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.267568 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.268790 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.273482 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.274214 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.275252 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4bv8c"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.275649 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-t8d8s"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.283105 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.289671 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.292180 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.292980 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.297362 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rwpbt"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.312745 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.326380 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.327270 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.330635 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p5csn"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.348227 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.367365 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.368244 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.373053 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vv8xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.388017 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.388923 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.391125 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.397045 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-94hjq"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.397640 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtsh\" (UniqueName: \"kubernetes.io/projected/78e0a81b-7050-4a6b-8f89-b1f02cf2bed4-kube-api-access-lmtsh\") pod \"barbican-operator-controller-manager-5d6449f6dc-sd74p\" (UID: \"78e0a81b-7050-4a6b-8f89-b1f02cf2bed4\") " pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.397695 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmcs\" (UniqueName: \"kubernetes.io/projected/0236e799-d5fb-4edf-b0cf-b40093e13c9f-kube-api-access-rcmcs\") pod \"cinder-operator-controller-manager-7478f7dbf9-6w9xz\" (UID: \"0236e799-d5fb-4edf-b0cf-b40093e13c9f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.397754 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cr2n\" (UniqueName: \"kubernetes.io/projected/0821bfee-e661-4cb0-9079-70ee60bdec02-kube-api-access-5cr2n\") pod \"glance-operator-controller-manager-78fdd796fd-x95m2\" (UID: \"0821bfee-e661-4cb0-9079-70ee60bdec02\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.397772 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9zz\" (UniqueName: \"kubernetes.io/projected/c4c33d5c-a111-42bd-932d-7b60aaa798be-kube-api-access-sl9zz\") pod \"designate-operator-controller-manager-b45d7bf98-qvfcm\" (UID: \"c4c33d5c-a111-42bd-932d-7b60aaa798be\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.419675 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-h44rl"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.420366 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.426646 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.429401 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.436268 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p4f4w"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.444216 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-h44rl"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.470976 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.471691 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.476480 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c9wfq"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499065 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cr2n\" (UniqueName: \"kubernetes.io/projected/0821bfee-e661-4cb0-9079-70ee60bdec02-kube-api-access-5cr2n\") pod \"glance-operator-controller-manager-78fdd796fd-x95m2\" (UID: \"0821bfee-e661-4cb0-9079-70ee60bdec02\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499107 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl9zz\" (UniqueName: \"kubernetes.io/projected/c4c33d5c-a111-42bd-932d-7b60aaa798be-kube-api-access-sl9zz\") pod \"designate-operator-controller-manager-b45d7bf98-qvfcm\" (UID: \"c4c33d5c-a111-42bd-932d-7b60aaa798be\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499142 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292rb\" (UniqueName: \"kubernetes.io/projected/67d01b92-a260-4a23-a395-1e2c5079dbed-kube-api-access-292rb\") pod \"heat-operator-controller-manager-594c8c9d5d-2wgql\" (UID: \"67d01b92-a260-4a23-a395-1e2c5079dbed\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499181 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtsh\" (UniqueName: \"kubernetes.io/projected/78e0a81b-7050-4a6b-8f89-b1f02cf2bed4-kube-api-access-lmtsh\") pod \"barbican-operator-controller-manager-5d6449f6dc-sd74p\" (UID: \"78e0a81b-7050-4a6b-8f89-b1f02cf2bed4\") " pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499200 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jtq\" (UniqueName: \"kubernetes.io/projected/3ada408d-b7d5-4d35-b779-65be4855e174-kube-api-access-95jtq\") pod \"horizon-operator-controller-manager-77d5c5b54f-g6t9k\" (UID: \"3ada408d-b7d5-4d35-b779-65be4855e174\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499234 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499253 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmcs\" (UniqueName: \"kubernetes.io/projected/0236e799-d5fb-4edf-b0cf-b40093e13c9f-kube-api-access-rcmcs\") pod \"cinder-operator-controller-manager-7478f7dbf9-6w9xz\" (UID: \"0236e799-d5fb-4edf-b0cf-b40093e13c9f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499293 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9hk\" (UniqueName: \"kubernetes.io/projected/817a0b42-6961-46cf-b353-38aee1dab88c-kube-api-access-rg9hk\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
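Every kube-api-access-* volume in these records (kube-api-access-lmtsh, -rcmcs, -5cr2n, -sl9zz, and so on) is the service-account token volume Kubernetes injects into each pod: a projected volume combining a bound token, the cluster CA bundle, and the pod's namespace. Roughly what the injected source looks like, built with the corev1 types; the random suffix and the 3607s token lifetime follow upstream convention, so treat the details below as an approximation:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // just over an hour; the kubelet refreshes it
        vol := corev1.Volume{
            Name: "kube-api-access-lmtsh", // suffix is randomized per pod
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Println(vol.Name, "projects", len(vol.VolumeSource.Projected.Sources), "sources")
    }
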
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.499788 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.530732 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.532310 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.537337 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.540930 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.550975 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xt22r"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.563812 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bhrmp"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.577277 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl9zz\" (UniqueName: \"kubernetes.io/projected/c4c33d5c-a111-42bd-932d-7b60aaa798be-kube-api-access-sl9zz\") pod \"designate-operator-controller-manager-b45d7bf98-qvfcm\" (UID: \"c4c33d5c-a111-42bd-932d-7b60aaa798be\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.582446 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmcs\" (UniqueName: \"kubernetes.io/projected/0236e799-d5fb-4edf-b0cf-b40093e13c9f-kube-api-access-rcmcs\") pod \"cinder-operator-controller-manager-7478f7dbf9-6w9xz\" (UID: \"0236e799-d5fb-4edf-b0cf-b40093e13c9f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.583399 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtsh\" (UniqueName: \"kubernetes.io/projected/78e0a81b-7050-4a6b-8f89-b1f02cf2bed4-kube-api-access-lmtsh\") pod \"barbican-operator-controller-manager-5d6449f6dc-sd74p\" (UID: \"78e0a81b-7050-4a6b-8f89-b1f02cf2bed4\") " pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.594952 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601725 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cr2n\" (UniqueName: \"kubernetes.io/projected/0821bfee-e661-4cb0-9079-70ee60bdec02-kube-api-access-5cr2n\") pod \"glance-operator-controller-manager-78fdd796fd-x95m2\" (UID: \"0821bfee-e661-4cb0-9079-70ee60bdec02\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601765 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9hk\" (UniqueName: \"kubernetes.io/projected/817a0b42-6961-46cf-b353-38aee1dab88c-kube-api-access-rg9hk\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601816 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld27z\" (UniqueName: \"kubernetes.io/projected/9bd38ee3-e401-40e3-8fdc-73722e175d2f-kube-api-access-ld27z\") pod \"keystone-operator-controller-manager-b8b6d4659-pwdwf\" (UID: \"9bd38ee3-e401-40e3-8fdc-73722e175d2f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601843 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkck\" (UniqueName: \"kubernetes.io/projected/d75eb578-095c-4ad4-b85d-c78417306fb0-kube-api-access-zvkck\") pod \"ironic-operator-controller-manager-598f7747c9-59hn2\" (UID: \"d75eb578-095c-4ad4-b85d-c78417306fb0\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601880 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292rb\" (UniqueName: \"kubernetes.io/projected/67d01b92-a260-4a23-a395-1e2c5079dbed-kube-api-access-292rb\") pod \"heat-operator-controller-manager-594c8c9d5d-2wgql\" (UID: \"67d01b92-a260-4a23-a395-1e2c5079dbed\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601920 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jtq\" (UniqueName: \"kubernetes.io/projected/3ada408d-b7d5-4d35-b779-65be4855e174-kube-api-access-95jtq\") pod \"horizon-operator-controller-manager-77d5c5b54f-g6t9k\" (UID: \"3ada408d-b7d5-4d35-b779-65be4855e174\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.601960 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.602007 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp2t5\" (UniqueName: \"kubernetes.io/projected/097a933b-c278-4367-881a-bbd0942d69b3-kube-api-access-qp2t5\") pod \"manila-operator-controller-manager-78c6999f6f-ltc6c\" (UID: \"097a933b-c278-4367-881a-bbd0942d69b3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"
Jan 26 11:09:33 crc kubenswrapper[4619]: E0126 11:09:33.602291 4619 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 11:09:33 crc kubenswrapper[4619]: E0126 11:09:33.602351 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert podName:817a0b42-6961-46cf-b353-38aee1dab88c nodeName:}" failed. No retries permitted until 2026-01-26 11:09:34.102337338 +0000 UTC m=+873.136378054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert") pod "infra-operator-controller-manager-758868c854-h44rl" (UID: "817a0b42-6961-46cf-b353-38aee1dab88c") : secret "infra-operator-webhook-server-cert" not found
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.617193 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.629221 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.668824 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292rb\" (UniqueName: \"kubernetes.io/projected/67d01b92-a260-4a23-a395-1e2c5079dbed-kube-api-access-292rb\") pod \"heat-operator-controller-manager-594c8c9d5d-2wgql\" (UID: \"67d01b92-a260-4a23-a395-1e2c5079dbed\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.671586 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.686296 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.688701 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9hk\" (UniqueName: \"kubernetes.io/projected/817a0b42-6961-46cf-b353-38aee1dab88c-kube-api-access-rg9hk\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.701055 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.701337 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.705193 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp2t5\" (UniqueName: \"kubernetes.io/projected/097a933b-c278-4367-881a-bbd0942d69b3-kube-api-access-qp2t5\") pod \"manila-operator-controller-manager-78c6999f6f-ltc6c\" (UID: \"097a933b-c278-4367-881a-bbd0942d69b3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.705233 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld27z\" (UniqueName: \"kubernetes.io/projected/9bd38ee3-e401-40e3-8fdc-73722e175d2f-kube-api-access-ld27z\") pod \"keystone-operator-controller-manager-b8b6d4659-pwdwf\" (UID: \"9bd38ee3-e401-40e3-8fdc-73722e175d2f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.705256 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkck\" (UniqueName: \"kubernetes.io/projected/d75eb578-095c-4ad4-b85d-c78417306fb0-kube-api-access-zvkck\") pod \"ironic-operator-controller-manager-598f7747c9-59hn2\" (UID: \"d75eb578-095c-4ad4-b85d-c78417306fb0\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.710840 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95jtq\" (UniqueName: \"kubernetes.io/projected/3ada408d-b7d5-4d35-b779-65be4855e174-kube-api-access-95jtq\") pod \"horizon-operator-controller-manager-77d5c5b54f-g6t9k\" (UID: \"3ada408d-b7d5-4d35-b779-65be4855e174\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.740184 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.741526 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.763702 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.764680 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.767923 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.770093 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-r5gzd"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.771260 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-p29nn"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.780791 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.801381 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4rrq4"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.801483 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.806561 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm554\" (UniqueName: \"kubernetes.io/projected/146ce69f-077f-483b-a7f6-d32bb6e2ad05-kube-api-access-vm554\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj\" (UID: \"146ce69f-077f-483b-a7f6-d32bb6e2ad05\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.806595 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nws2n\" (UniqueName: \"kubernetes.io/projected/d991f0cd-a82d-443e-b399-ab59ac238b0b-kube-api-access-nws2n\") pod \"nova-operator-controller-manager-7bdb645866-fzrxh\" (UID: \"d991f0cd-a82d-443e-b399-ab59ac238b0b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.806634 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbkl\" (UniqueName: \"kubernetes.io/projected/3edab216-d77f-4b95-b98b-0ed86e9b2305-kube-api-access-bwbkl\") pod \"neutron-operator-controller-manager-78d58447c5-c8mj6\" (UID: \"3edab216-d77f-4b95-b98b-0ed86e9b2305\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.834560 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkck\" (UniqueName: \"kubernetes.io/projected/d75eb578-095c-4ad4-b85d-c78417306fb0-kube-api-access-zvkck\") pod \"ironic-operator-controller-manager-598f7747c9-59hn2\" (UID: \"d75eb578-095c-4ad4-b85d-c78417306fb0\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.835628 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld27z\" (UniqueName: \"kubernetes.io/projected/9bd38ee3-e401-40e3-8fdc-73722e175d2f-kube-api-access-ld27z\") pod \"keystone-operator-controller-manager-b8b6d4659-pwdwf\" (UID: \"9bd38ee3-e401-40e3-8fdc-73722e175d2f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.835730 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp2t5\" (UniqueName: \"kubernetes.io/projected/097a933b-c278-4367-881a-bbd0942d69b3-kube-api-access-qp2t5\") pod \"manila-operator-controller-manager-78c6999f6f-ltc6c\" (UID: \"097a933b-c278-4367-881a-bbd0942d69b3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.840803 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.891200 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.909359 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm554\" (UniqueName: \"kubernetes.io/projected/146ce69f-077f-483b-a7f6-d32bb6e2ad05-kube-api-access-vm554\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj\" (UID: \"146ce69f-077f-483b-a7f6-d32bb6e2ad05\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.909404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nws2n\" (UniqueName: \"kubernetes.io/projected/d991f0cd-a82d-443e-b399-ab59ac238b0b-kube-api-access-nws2n\") pod \"nova-operator-controller-manager-7bdb645866-fzrxh\" (UID: \"d991f0cd-a82d-443e-b399-ab59ac238b0b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.909430 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbkl\" (UniqueName: \"kubernetes.io/projected/3edab216-d77f-4b95-b98b-0ed86e9b2305-kube-api-access-bwbkl\") pod \"neutron-operator-controller-manager-78d58447c5-c8mj6\" (UID: \"3edab216-d77f-4b95-b98b-0ed86e9b2305\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.943060 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm554\" (UniqueName: \"kubernetes.io/projected/146ce69f-077f-483b-a7f6-d32bb6e2ad05-kube-api-access-vm554\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj\" (UID: \"146ce69f-077f-483b-a7f6-d32bb6e2ad05\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.970756 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"]
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.971493 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nws2n\" (UniqueName: \"kubernetes.io/projected/d991f0cd-a82d-443e-b399-ab59ac238b0b-kube-api-access-nws2n\") pod \"nova-operator-controller-manager-7bdb645866-fzrxh\" (UID: \"d991f0cd-a82d-443e-b399-ab59ac238b0b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.971638 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.979864 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbkl\" (UniqueName: \"kubernetes.io/projected/3edab216-d77f-4b95-b98b-0ed86e9b2305-kube-api-access-bwbkl\") pod \"neutron-operator-controller-manager-78d58447c5-c8mj6\" (UID: \"3edab216-d77f-4b95-b98b-0ed86e9b2305\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.992347 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gn26f"
Jan 26 11:09:33 crc kubenswrapper[4619]: I0126 11:09:33.992530 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.010511 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2rt\" (UniqueName: \"kubernetes.io/projected/8d9312b1-e850-4099-b5a4-60c113f009a3-kube-api-access-mn2rt\") pod \"octavia-operator-controller-manager-5f4cd88d46-vvxv5\" (UID: \"8d9312b1-e850-4099-b5a4-60c113f009a3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.011039 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.030254 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.031053 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.033843 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.040265 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-76bj2"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.056791 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.057596 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.058253 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.060008 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.060052 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6c9k4"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.068180 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.074205 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.075029 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.080640 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-khrw6"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.092995 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.095074 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.095995 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.098697 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jp69r"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.101344 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.102164 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.103528 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qptkr"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.108943 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.112789 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113278 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf64g\" (UniqueName: \"kubernetes.io/projected/a6eb6ada-8607-4687-a235-e8c5f581e4b4-kube-api-access-kf64g\") pod \"ovn-operator-controller-manager-6f75f45d54-fdtcd\" (UID: \"a6eb6ada-8607-4687-a235-e8c5f581e4b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113320 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113349 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2rt\" (UniqueName: \"kubernetes.io/projected/8d9312b1-e850-4099-b5a4-60c113f009a3-kube-api-access-mn2rt\") pod \"octavia-operator-controller-manager-5f4cd88d46-vvxv5\" (UID: \"8d9312b1-e850-4099-b5a4-60c113f009a3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113391 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7w92\" (UniqueName: \"kubernetes.io/projected/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-kube-api-access-z7w92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113443 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.113461 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7kpp\" (UniqueName: \"kubernetes.io/projected/7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a-kube-api-access-j7kpp\") pod \"placement-operator-controller-manager-79d5ccc684-rpdwn\" (UID: \"7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"
Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.113639 4619 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.113691 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert podName:817a0b42-6961-46cf-b353-38aee1dab88c nodeName:}" failed. No retries permitted until 2026-01-26 11:09:35.113675326 +0000 UTC m=+874.147716042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert") pod "infra-operator-controller-manager-758868c854-h44rl" (UID: "817a0b42-6961-46cf-b353-38aee1dab88c") : secret "infra-operator-webhook-server-cert" not found
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.126941 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.135774 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.149318 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2rt\" (UniqueName: \"kubernetes.io/projected/8d9312b1-e850-4099-b5a4-60c113f009a3-kube-api-access-mn2rt\") pod \"octavia-operator-controller-manager-5f4cd88d46-vvxv5\" (UID: \"8d9312b1-e850-4099-b5a4-60c113f009a3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.151604 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.172710 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.172914 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.193995 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h"]
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.202629 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h"
Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.206586 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8wflm"
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.214142 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxr22\" (UniqueName: \"kubernetes.io/projected/861bc5f6-bbf8-4626-aed7-a015389630d2-kube-api-access-rxr22\") pod \"telemetry-operator-controller-manager-85cd9769bb-z44mm\" (UID: \"861bc5f6-bbf8-4626-aed7-a015389630d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.214193 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksls\" (UniqueName: \"kubernetes.io/projected/ad531f03-5ce8-475e-923a-15a9561e79d0-kube-api-access-pksls\") pod \"swift-operator-controller-manager-547cbdb99f-jcstk\" (UID: \"ad531f03-5ce8-475e-923a-15a9561e79d0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.214216 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7w92\" (UniqueName: \"kubernetes.io/projected/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-kube-api-access-z7w92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.214260 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.214278 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7kpp\" (UniqueName: \"kubernetes.io/projected/7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a-kube-api-access-j7kpp\") pod \"placement-operator-controller-manager-79d5ccc684-rpdwn\" (UID: \"7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.214813 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.214857 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:09:34.714842092 +0000 UTC m=+873.748882798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.215006 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf64g\" (UniqueName: \"kubernetes.io/projected/a6eb6ada-8607-4687-a235-e8c5f581e4b4-kube-api-access-kf64g\") pod \"ovn-operator-controller-manager-6f75f45d54-fdtcd\" (UID: \"a6eb6ada-8607-4687-a235-e8c5f581e4b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.219776 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wblj5" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.232865 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.257654 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7w92\" (UniqueName: \"kubernetes.io/projected/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-kube-api-access-z7w92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.286328 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-r479p"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.293293 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7kpp\" (UniqueName: \"kubernetes.io/projected/7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a-kube-api-access-j7kpp\") pod \"placement-operator-controller-manager-79d5ccc684-rpdwn\" (UID: \"7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.310331 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf64g\" (UniqueName: \"kubernetes.io/projected/a6eb6ada-8607-4687-a235-e8c5f581e4b4-kube-api-access-kf64g\") pod \"ovn-operator-controller-manager-6f75f45d54-fdtcd\" (UID: \"a6eb6ada-8607-4687-a235-e8c5f581e4b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.318286 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxr22\" (UniqueName: \"kubernetes.io/projected/861bc5f6-bbf8-4626-aed7-a015389630d2-kube-api-access-rxr22\") pod \"telemetry-operator-controller-manager-85cd9769bb-z44mm\" (UID: \"861bc5f6-bbf8-4626-aed7-a015389630d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.318392 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksls\" (UniqueName: \"kubernetes.io/projected/ad531f03-5ce8-475e-923a-15a9561e79d0-kube-api-access-pksls\") pod \"swift-operator-controller-manager-547cbdb99f-jcstk\" (UID: 
\"ad531f03-5ce8-475e-923a-15a9561e79d0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.318447 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkwl\" (UniqueName: \"kubernetes.io/projected/f2d78077-e281-4b95-a576-892bf5eaea8d-kube-api-access-ttkwl\") pod \"test-operator-controller-manager-69797bbcbd-gpm8h\" (UID: \"f2d78077-e281-4b95-a576-892bf5eaea8d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.318513 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98jj\" (UniqueName: \"kubernetes.io/projected/1200fb20-58ac-4e2b-aa47-d8e3bb34578b-kube-api-access-r98jj\") pod \"watcher-operator-controller-manager-564965969-r479p\" (UID: \"1200fb20-58ac-4e2b-aa47-d8e3bb34578b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.337870 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.366547 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksls\" (UniqueName: \"kubernetes.io/projected/ad531f03-5ce8-475e-923a-15a9561e79d0-kube-api-access-pksls\") pod \"swift-operator-controller-manager-547cbdb99f-jcstk\" (UID: \"ad531f03-5ce8-475e-923a-15a9561e79d0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.373538 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxr22\" (UniqueName: \"kubernetes.io/projected/861bc5f6-bbf8-4626-aed7-a015389630d2-kube-api-access-rxr22\") pod \"telemetry-operator-controller-manager-85cd9769bb-z44mm\" (UID: \"861bc5f6-bbf8-4626-aed7-a015389630d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.376897 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.393058 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.395223 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.399314 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.399495 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.399765 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-x2khl" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.407167 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.414788 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.419430 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkwl\" (UniqueName: \"kubernetes.io/projected/f2d78077-e281-4b95-a576-892bf5eaea8d-kube-api-access-ttkwl\") pod \"test-operator-controller-manager-69797bbcbd-gpm8h\" (UID: \"f2d78077-e281-4b95-a576-892bf5eaea8d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.419472 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjshl\" (UniqueName: \"kubernetes.io/projected/3b4348c7-3d25-4d2b-837e-5add3c85cd30-kube-api-access-fjshl\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.419518 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r98jj\" (UniqueName: \"kubernetes.io/projected/1200fb20-58ac-4e2b-aa47-d8e3bb34578b-kube-api-access-r98jj\") pod \"watcher-operator-controller-manager-564965969-r479p\" (UID: \"1200fb20-58ac-4e2b-aa47-d8e3bb34578b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.419560 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.419596 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.441286 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ttkwl\" (UniqueName: \"kubernetes.io/projected/f2d78077-e281-4b95-a576-892bf5eaea8d-kube-api-access-ttkwl\") pod \"test-operator-controller-manager-69797bbcbd-gpm8h\" (UID: \"f2d78077-e281-4b95-a576-892bf5eaea8d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.442998 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.443343 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98jj\" (UniqueName: \"kubernetes.io/projected/1200fb20-58ac-4e2b-aa47-d8e3bb34578b-kube-api-access-r98jj\") pod \"watcher-operator-controller-manager-564965969-r479p\" (UID: \"1200fb20-58ac-4e2b-aa47-d8e3bb34578b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.466055 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.522337 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjshl\" (UniqueName: \"kubernetes.io/projected/3b4348c7-3d25-4d2b-837e-5add3c85cd30-kube-api-access-fjshl\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.522431 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.522494 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.522700 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.522751 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:35.022736148 +0000 UTC m=+874.056776864 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.523152 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.523228 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:35.023210111 +0000 UTC m=+874.057250827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.563607 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjshl\" (UniqueName: \"kubernetes.io/projected/3b4348c7-3d25-4d2b-837e-5add3c85cd30-kube-api-access-fjshl\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.565953 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.576133 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.577079 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.580876 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xm2z8" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.605257 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.605541 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.623894 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnkdh\" (UniqueName: \"kubernetes.io/projected/9f67fdeb-3415-4da5-a78e-66f6afad477f-kube-api-access-pnkdh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j8r8q\" (UID: \"9f67fdeb-3415-4da5-a78e-66f6afad477f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.728588 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.728987 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnkdh\" (UniqueName: \"kubernetes.io/projected/9f67fdeb-3415-4da5-a78e-66f6afad477f-kube-api-access-pnkdh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j8r8q\" (UID: \"9f67fdeb-3415-4da5-a78e-66f6afad477f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.728812 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: E0126 11:09:34.729416 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:09:35.729401357 +0000 UTC m=+874.763442073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.765526 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnkdh\" (UniqueName: \"kubernetes.io/projected/9f67fdeb-3415-4da5-a78e-66f6afad477f-kube-api-access-pnkdh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j8r8q\" (UID: \"9f67fdeb-3415-4da5-a78e-66f6afad477f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.777995 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.909751 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.922522 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.936241 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm"] Jan 26 11:09:34 crc kubenswrapper[4619]: I0126 11:09:34.949496 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.039926 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.039990 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.040147 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.040195 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:36.040180973 +0000 UTC m=+875.074221689 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.040536 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.040570 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:36.040562183 +0000 UTC m=+875.074602899 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.144278 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.144445 4619 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.144493 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert podName:817a0b42-6961-46cf-b353-38aee1dab88c nodeName:}" failed. No retries permitted until 2026-01-26 11:09:37.144478855 +0000 UTC m=+876.178519571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert") pod "infra-operator-controller-manager-758868c854-h44rl" (UID: "817a0b42-6961-46cf-b353-38aee1dab88c") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: W0126 11:09:35.149718 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67d01b92_a260_4a23_a395_1e2c5079dbed.slice/crio-d0aca3288c0c86b5a7b7c9a5d48ac181e913cacbd636b86e1e76c4e07f9055b0 WatchSource:0}: Error finding container d0aca3288c0c86b5a7b7c9a5d48ac181e913cacbd636b86e1e76c4e07f9055b0: Status 404 returned error can't find the container with id d0aca3288c0c86b5a7b7c9a5d48ac181e913cacbd636b86e1e76c4e07f9055b0 Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.329136 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.460387 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2"] Jan 26 11:09:35 crc kubenswrapper[4619]: W0126 11:09:35.476818 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0821bfee_e661_4cb0_9079_70ee60bdec02.slice/crio-bbacae67b770abefdaa712e2da7973179af8e94cc8a4d82e0354ba10c9a25c1e WatchSource:0}: Error finding container bbacae67b770abefdaa712e2da7973179af8e94cc8a4d82e0354ba10c9a25c1e: Status 404 returned error can't find the container with id bbacae67b770abefdaa712e2da7973179af8e94cc8a4d82e0354ba10c9a25c1e Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.481550 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.490209 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.498864 4619 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"] Jan 26 11:09:35 crc kubenswrapper[4619]: W0126 11:09:35.505703 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bd38ee3_e401_40e3_8fdc_73722e175d2f.slice/crio-434e94d999a37055688aee881d7c51cfa28db8027a1e321aa7739814e092835d WatchSource:0}: Error finding container 434e94d999a37055688aee881d7c51cfa28db8027a1e321aa7739814e092835d: Status 404 returned error can't find the container with id 434e94d999a37055688aee881d7c51cfa28db8027a1e321aa7739814e092835d Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.523084 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.713575 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.756316 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.756716 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: E0126 11:09:35.756775 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:09:37.756759811 +0000 UTC m=+876.790800527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.774391 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.800043 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.840090 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"] Jan 26 11:09:35 crc kubenswrapper[4619]: I0126 11:09:35.986713 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm"] Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.012155 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q"] Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.026035 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" event={"ID":"146ce69f-077f-483b-a7f6-d32bb6e2ad05","Type":"ContainerStarted","Data":"139c30405851da60c0a23d292120486355b16d163ff05b97064fdb50389ef404"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.026885 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"] Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.040230 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxr22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-z44mm_openstack-operators(861bc5f6-bbf8-4626-aed7-a015389630d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.042119 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" podUID="861bc5f6-bbf8-4626-aed7-a015389630d2" Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.044222 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk"] Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.051379 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" event={"ID":"0821bfee-e661-4cb0-9079-70ee60bdec02","Type":"ContainerStarted","Data":"bbacae67b770abefdaa712e2da7973179af8e94cc8a4d82e0354ba10c9a25c1e"} Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.057913 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pksls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-jcstk_openstack-operators(ad531f03-5ce8-475e-923a-15a9561e79d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.059109 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" podUID="ad531f03-5ce8-475e-923a-15a9561e79d0" Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.059733 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" event={"ID":"d991f0cd-a82d-443e-b399-ab59ac238b0b","Type":"ContainerStarted","Data":"4c8bab6f473cf245b918b0b031fc3ce85de4b4e5e9ed6d232bf2fa35a06b3202"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.061456 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-r479p"] Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.065400 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.065506 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.068486 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.068545 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:38.068528424 +0000 UTC m=+877.102569140 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.068586 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.068605 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:38.068598926 +0000 UTC m=+877.102639642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.068968 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h"] Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.083997 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p" event={"ID":"78e0a81b-7050-4a6b-8f89-b1f02cf2bed4","Type":"ContainerStarted","Data":"0583f78ce9a43c5361fb3add24bf8ab607c0a82b93c4884eff1fc0e149e5a58b"} Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.083998 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7kpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-rpdwn_openstack-operators(7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:09:36 crc kubenswrapper[4619]: W0126 11:09:36.086283 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2d78077_e281_4b95_a576_892bf5eaea8d.slice/crio-cf61731bb6a88d234e4c61ffea0d46a4ec1ce3f01f8f21374314716b190eb803 WatchSource:0}: Error finding container cf61731bb6a88d234e4c61ffea0d46a4ec1ce3f01f8f21374314716b190eb803: Status 404 returned error can't find the container with id cf61731bb6a88d234e4c61ffea0d46a4ec1ce3f01f8f21374314716b190eb803 Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.086478 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" podUID="7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a" Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.087273 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r98jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-r479p_openstack-operators(1200fb20-58ac-4e2b-aa47-d8e3bb34578b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.088511 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podUID="1200fb20-58ac-4e2b-aa47-d8e3bb34578b" Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.093276 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" event={"ID":"a6eb6ada-8607-4687-a235-e8c5f581e4b4","Type":"ContainerStarted","Data":"10ea03bd3077bf0ae2335df097899b949bfd32e0f0c1a4c270e50f2e09890b00"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.099923 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" event={"ID":"8d9312b1-e850-4099-b5a4-60c113f009a3","Type":"ContainerStarted","Data":"45eb5a76cbc86a0b5e8c9a3ff39804d82cad91d6995d4982d3f731a36d1c7e18"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.106843 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" event={"ID":"67d01b92-a260-4a23-a395-1e2c5079dbed","Type":"ContainerStarted","Data":"d0aca3288c0c86b5a7b7c9a5d48ac181e913cacbd636b86e1e76c4e07f9055b0"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.112235 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" event={"ID":"0236e799-d5fb-4edf-b0cf-b40093e13c9f","Type":"ContainerStarted","Data":"dd006dd15c107f79ef3dc626ec61d130a38a74a4a0cbb2b1ff83aa42879e2f3e"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.113521 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" event={"ID":"c4c33d5c-a111-42bd-932d-7b60aaa798be","Type":"ContainerStarted","Data":"9acc14f33013a60beb5d951d46ced9d5cfc61d17fe6b347ab8334599a50f041c"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.114964 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k" event={"ID":"3ada408d-b7d5-4d35-b779-65be4855e174","Type":"ContainerStarted","Data":"9a8ee9fafe1b04dc0ea56512273ad1f36e9d7caf32d47ada75c333fe97166fcc"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.115823 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" event={"ID":"3edab216-d77f-4b95-b98b-0ed86e9b2305","Type":"ContainerStarted","Data":"be4a0cdfd698c532cc93791404cf75e402e2df9b60bb48420f18ba02fec7f94b"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.117944 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf" event={"ID":"9bd38ee3-e401-40e3-8fdc-73722e175d2f","Type":"ContainerStarted","Data":"434e94d999a37055688aee881d7c51cfa28db8027a1e321aa7739814e092835d"} Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.119509 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" event={"ID":"097a933b-c278-4367-881a-bbd0942d69b3","Type":"ContainerStarted","Data":"d7200718a62e5d02c2b74adc4a29ccd445a2475252fa93a8bfb4943484aa8c0f"} Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.120415 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttkwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-gpm8h_openstack-operators(f2d78077-e281-4b95-a576-892bf5eaea8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:09:36 crc kubenswrapper[4619]: I0126 11:09:36.120713 4619 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2" event={"ID":"d75eb578-095c-4ad4-b85d-c78417306fb0","Type":"ContainerStarted","Data":"a64c76d9743a4c93520a2c78b035a0931db45a735cb7177029d4ff16df4b4c56"} Jan 26 11:09:36 crc kubenswrapper[4619]: E0126 11:09:36.121942 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" podUID="f2d78077-e281-4b95-a576-892bf5eaea8d" Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.148957 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" event={"ID":"7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a","Type":"ContainerStarted","Data":"e9453087a7643fa25e4a84cb9f910f127a417b5684e3728999785f260df22990"} Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.151780 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" event={"ID":"861bc5f6-bbf8-4626-aed7-a015389630d2","Type":"ContainerStarted","Data":"4b6a4bd8909138ecd931ed7db2b85f24f4e4d482962eb1cbf533e75b6dc8714a"} Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.154350 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" event={"ID":"ad531f03-5ce8-475e-923a-15a9561e79d0","Type":"ContainerStarted","Data":"35438577c73c1b2ebbba5bc1ed635f62228bb48d1edd6c85f8e7e23a28f60ea3"} Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.154772 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" podUID="7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a" Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.155173 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" podUID="ad531f03-5ce8-475e-923a-15a9561e79d0" Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.157222 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" event={"ID":"9f67fdeb-3415-4da5-a78e-66f6afad477f","Type":"ContainerStarted","Data":"ac9cfc3646b0adfaf36b2531670ae0763de225e86c7b1250b2a1b51188ed2142"} Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.162152 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" event={"ID":"1200fb20-58ac-4e2b-aa47-d8e3bb34578b","Type":"ContainerStarted","Data":"06a5244070b9b810cc70f1b0a18250d9109adbca14c3d45515fa2a8e4f9ac5e6"} Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.163375 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podUID="1200fb20-58ac-4e2b-aa47-d8e3bb34578b" Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.170064 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" podUID="861bc5f6-bbf8-4626-aed7-a015389630d2" Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.209736 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" event={"ID":"f2d78077-e281-4b95-a576-892bf5eaea8d","Type":"ContainerStarted","Data":"cf61731bb6a88d234e4c61ffea0d46a4ec1ce3f01f8f21374314716b190eb803"} Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.212744 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" podUID="f2d78077-e281-4b95-a576-892bf5eaea8d" Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.213578 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.213693 4619 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.213731 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert podName:817a0b42-6961-46cf-b353-38aee1dab88c nodeName:}" failed. No retries permitted until 2026-01-26 11:09:41.213719479 +0000 UTC m=+880.247760195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert") pod "infra-operator-controller-manager-758868c854-h44rl" (UID: "817a0b42-6961-46cf-b353-38aee1dab88c") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:37 crc kubenswrapper[4619]: I0126 11:09:37.821114 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.821285 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:37 crc kubenswrapper[4619]: E0126 11:09:37.821373 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:09:41.821355069 +0000 UTC m=+880.855395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:38 crc kubenswrapper[4619]: I0126 11:09:38.124996 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:38 crc kubenswrapper[4619]: I0126 11:09:38.125060 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.125198 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.125244 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.125285 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:42.125265896 +0000 UTC m=+881.159306832 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.125308 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:42.125298228 +0000 UTC m=+881.159339174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.232776 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" podUID="861bc5f6-bbf8-4626-aed7-a015389630d2" Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.233093 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" podUID="7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a" Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.238340 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podUID="1200fb20-58ac-4e2b-aa47-d8e3bb34578b" Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.238413 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" podUID="f2d78077-e281-4b95-a576-892bf5eaea8d" Jan 26 11:09:38 crc kubenswrapper[4619]: E0126 11:09:38.241422 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" podUID="ad531f03-5ce8-475e-923a-15a9561e79d0" Jan 26 11:09:41 crc kubenswrapper[4619]: I0126 11:09:41.296891 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:09:41 crc kubenswrapper[4619]: E0126 11:09:41.297099 4619 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:41 crc kubenswrapper[4619]: E0126 11:09:41.297804 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert podName:817a0b42-6961-46cf-b353-38aee1dab88c nodeName:}" failed. No retries permitted until 2026-01-26 11:09:49.297778758 +0000 UTC m=+888.331819474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert") pod "infra-operator-controller-manager-758868c854-h44rl" (UID: "817a0b42-6961-46cf-b353-38aee1dab88c") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:09:41 crc kubenswrapper[4619]: I0126 11:09:41.913822 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:09:41 crc kubenswrapper[4619]: E0126 11:09:41.914041 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:41 crc kubenswrapper[4619]: E0126 11:09:41.914125 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:09:49.914105077 +0000 UTC m=+888.948145793 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:09:42 crc kubenswrapper[4619]: I0126 11:09:42.223020 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:42 crc kubenswrapper[4619]: I0126 11:09:42.223109 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:09:42 crc kubenswrapper[4619]: E0126 11:09:42.223258 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:09:42 crc kubenswrapper[4619]: E0126 11:09:42.223318 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:50.223303188 +0000 UTC m=+889.257343904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found Jan 26 11:09:42 crc kubenswrapper[4619]: E0126 11:09:42.223363 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:09:42 crc kubenswrapper[4619]: E0126 11:09:42.223382 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:09:50.22337533 +0000 UTC m=+889.257416046 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found Jan 26 11:09:49 crc kubenswrapper[4619]: I0126 11:09:49.345246 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:09:49 crc kubenswrapper[4619]: I0126 11:09:49.354547 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/817a0b42-6961-46cf-b353-38aee1dab88c-cert\") pod \"infra-operator-controller-manager-758868c854-h44rl\" (UID: \"817a0b42-6961-46cf-b353-38aee1dab88c\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.652238 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.652775 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.652238 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e"
Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.652775 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-c8mj6_openstack-operators(3edab216-d77f-4b95-b98b-0ed86e9b2305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:09:49 crc kubenswrapper[4619]: I0126 11:09:49.652925 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl"
Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.654398 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" podUID="3edab216-d77f-4b95-b98b-0ed86e9b2305"
Jan 26 11:09:49 crc kubenswrapper[4619]: I0126 11:09:49.955108 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.955367 4619 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 11:09:49 crc kubenswrapper[4619]: E0126 11:09:49.955447 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert podName:366e3862-4a5d-447e-890e-1a1ed1d7bf5f nodeName:}" failed. No retries permitted until 2026-01-26 11:10:05.955424395 +0000 UTC m=+904.989465121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" (UID: "366e3862-4a5d-447e-890e-1a1ed1d7bf5f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.259763 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.259882 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:09:50 crc kubenswrapper[4619]: E0126 11:09:50.260001 4619 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 11:09:50 crc kubenswrapper[4619]: E0126 11:09:50.260049 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:10:06.260034261 +0000 UTC m=+905.294074977 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "metrics-server-cert" not found
Jan 26 11:09:50 crc kubenswrapper[4619]: E0126 11:09:50.260053 4619 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 11:09:50 crc kubenswrapper[4619]: E0126 11:09:50.260098 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs podName:3b4348c7-3d25-4d2b-837e-5add3c85cd30 nodeName:}" failed. No retries permitted until 2026-01-26 11:10:06.260086012 +0000 UTC m=+905.294126738 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs") pod "openstack-operator-controller-manager-77bff5b64d-pzhsq" (UID: "3b4348c7-3d25-4d2b-837e-5add3c85cd30") : secret "webhook-server-cert" not found
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.298780 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"]
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.300789 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.308755 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"]
Jan 26 11:09:50 crc kubenswrapper[4619]: E0126 11:09:50.333285 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" podUID="3edab216-d77f-4b95-b98b-0ed86e9b2305"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.360851 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.361050 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27n6\" (UniqueName: \"kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.361137 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.462429 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.462557 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27n6\" (UniqueName: \"kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.462596 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.463216 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.463239 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.514091 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27n6\" (UniqueName: \"kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6\") pod \"redhat-marketplace-4kqf4\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:50 crc kubenswrapper[4619]: I0126 11:09:50.624568 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqf4"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.269667 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492"
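
Note: the log now settles into a cycle of ErrImagePull (an actual failed pull attempt) followed by ImagePullBackOff (the kubelet refusing to retry until the back-off window expires). A hypothetical triage helper follows that tallies the back-off entries per image when fed this log on stdin; the regular expression is an assumption matched to the escaped quoting visible in the lines above.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Matches: Back-off pulling image \\\"quay.io/...\\\" (three
    	// literal backslashes around the quoted image reference).
    	re := regexp.MustCompile(`Back-off pulling image \\{3}"([^\\]+)\\{3}"`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can run to several KB
    	for sc.Scan() {
    		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
    			counts[m[1]]++
    		}
    	}
    	for image, n := range counts {
    		fmt.Printf("%4d  %s\n", n, image)
    	}
    }
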
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.270194 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-292rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-2wgql_openstack-operators(67d01b92-a260-4a23-a395-1e2c5079dbed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.271432 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" podUID="67d01b92-a260-4a23-a395-1e2c5079dbed"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.338173 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" podUID="67d01b92-a260-4a23-a395-1e2c5079dbed"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.934016 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.934220 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qp2t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-ltc6c_openstack-operators(097a933b-c278-4367-881a-bbd0942d69b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:09:51 crc kubenswrapper[4619]: E0126 11:09:51.935395 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" podUID="097a933b-c278-4367-881a-bbd0942d69b3"
Jan 26 11:09:52 crc kubenswrapper[4619]: E0126 11:09:52.344314 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" podUID="097a933b-c278-4367-881a-bbd0942d69b3"
Jan 26 11:09:52 crc kubenswrapper[4619]: E0126 11:09:52.593684 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece"
Jan 26 11:09:52 crc kubenswrapper[4619]: E0126 11:09:52.594062 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl9zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-qvfcm_openstack-operators(c4c33d5c-a111-42bd-932d-7b60aaa798be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:09:52 crc kubenswrapper[4619]: E0126 11:09:52.597445 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" podUID="c4c33d5c-a111-42bd-932d-7b60aaa798be"
Jan 26 11:09:53 crc kubenswrapper[4619]: E0126 11:09:53.203665 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84"
Jan 26 11:09:53 crc kubenswrapper[4619]: E0126 11:09:53.203880 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vm554,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj_openstack-operators(146ce69f-077f-483b-a7f6-d32bb6e2ad05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:09:53 crc kubenswrapper[4619]: E0126 11:09:53.205056 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" podUID="146ce69f-077f-483b-a7f6-d32bb6e2ad05"
Jan 26 11:09:53 crc kubenswrapper[4619]: E0126 11:09:53.352740 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" podUID="c4c33d5c-a111-42bd-932d-7b60aaa798be"
Jan 26 11:09:53 crc kubenswrapper[4619]: E0126 11:09:53.352882 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" podUID="146ce69f-077f-483b-a7f6-d32bb6e2ad05"
Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.325151 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhptm"]
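
Note: the recurring pull error "rpc error: code = Canceled desc = copying config: context canceled" means the CRI-side pull was aborted because the kubelet canceled the request's context (for example, when the pod worker gives up on that sync attempt), not that the registry rejected the image. A minimal sketch of how a canceled Go context surfaces as exactly this gRPC status; slowPull is a hypothetical stand-in for the CRI image-service call.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc/status"
    )

    func slowPull(ctx context.Context, image string) error {
    	select {
    	case <-time.After(10 * time.Second): // pretend the registry is slow
    		return nil
    	case <-ctx.Done():
    		// gRPC maps a canceled client context onto codes.Canceled,
    		// which prints as "rpc error: code = Canceled ...".
    		return status.FromContextError(ctx.Err()).Err()
    	}
    }

    func main() {
    	ctx, cancel := context.WithCancel(context.Background())
    	go func() { time.Sleep(100 * time.Millisecond); cancel() }()
    	if err := slowPull(ctx, "quay.io/openstack-k8s-operators/heat-operator"); err != nil {
    		fmt.Println("PullImage from image service failed:", err)
    	}
    }
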
Need to start a new one" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.356777 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhptm"] Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.374138 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-catalog-content\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.374485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnl2\" (UniqueName: \"kubernetes.io/projected/d179e424-352f-4f5b-afd3-c68b8e79c096-kube-api-access-4mnl2\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.374683 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-utilities\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.477110 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnl2\" (UniqueName: \"kubernetes.io/projected/d179e424-352f-4f5b-afd3-c68b8e79c096-kube-api-access-4mnl2\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.477206 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-utilities\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.477302 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-catalog-content\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.477873 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-catalog-content\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.478024 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d179e424-352f-4f5b-afd3-c68b8e79c096-utilities\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.502551 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4mnl2\" (UniqueName: \"kubernetes.io/projected/d179e424-352f-4f5b-afd3-c68b8e79c096-kube-api-access-4mnl2\") pod \"community-operators-zhptm\" (UID: \"d179e424-352f-4f5b-afd3-c68b8e79c096\") " pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:57 crc kubenswrapper[4619]: I0126 11:09:57.655558 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:09:58 crc kubenswrapper[4619]: E0126 11:09:58.997376 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 26 11:09:58 crc kubenswrapper[4619]: E0126 11:09:58.997556 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rcmcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-6w9xz_openstack-operators(0236e799-d5fb-4edf-b0cf-b40093e13c9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:09:58 crc kubenswrapper[4619]: E0126 11:09:58.999793 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" podUID="0236e799-d5fb-4edf-b0cf-b40093e13c9f" Jan 26 11:09:59 crc kubenswrapper[4619]: E0126 11:09:59.412094 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" podUID="0236e799-d5fb-4edf-b0cf-b40093e13c9f" Jan 26 11:10:00 crc kubenswrapper[4619]: E0126 11:10:00.412086 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 11:10:00 crc kubenswrapper[4619]: E0126 11:10:00.412414 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn2rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-vvxv5_openstack-operators(8d9312b1-e850-4099-b5a4-60c113f009a3): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:10:00 crc kubenswrapper[4619]: E0126 11:10:00.413717 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" podUID="8d9312b1-e850-4099-b5a4-60c113f009a3" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.704415 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"] Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.709879 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.713475 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"] Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.726794 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.726850 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.726896 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mslc\" (UniqueName: \"kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.827885 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.827951 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.827996 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mslc\" (UniqueName: \"kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.828551 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.828606 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:00 crc kubenswrapper[4619]: I0126 11:10:00.846476 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mslc\" (UniqueName: \"kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc\") pod \"redhat-operators-kzvfk\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:01 crc kubenswrapper[4619]: I0126 11:10:01.027502 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:01 crc kubenswrapper[4619]: E0126 11:10:01.402169 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" podUID="8d9312b1-e850-4099-b5a4-60c113f009a3" Jan 26 11:10:01 crc kubenswrapper[4619]: E0126 11:10:01.512406 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 11:10:01 crc kubenswrapper[4619]: E0126 11:10:01.512609 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cr2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-x95m2_openstack-operators(0821bfee-e661-4cb0-9079-70ee60bdec02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:10:01 crc kubenswrapper[4619]: E0126 11:10:01.513861 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" podUID="0821bfee-e661-4cb0-9079-70ee60bdec02" Jan 26 11:10:02 crc kubenswrapper[4619]: E0126 11:10:02.409780 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" podUID="0821bfee-e661-4cb0-9079-70ee60bdec02" Jan 26 11:10:02 crc kubenswrapper[4619]: E0126 11:10:02.695796 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 26 11:10:02 crc kubenswrapper[4619]: E0126 11:10:02.695993 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r98jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-r479p_openstack-operators(1200fb20-58ac-4e2b-aa47-d8e3bb34578b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:10:02 crc kubenswrapper[4619]: E0126 11:10:02.697376 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podUID="1200fb20-58ac-4e2b-aa47-d8e3bb34578b" Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.113456 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.114132 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
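The ErrImagePull entries above all fail with the same "copying config: context canceled" error, which points at the pulls being cancelled as a group rather than each image failing on its own. A minimal stdlib-only Python sketch for tallying these failures per image from a log in this format (the kubelet.log file name is an assumption; the field layout is taken from the entries above):

    import re, collections, sys

    # Count "PullImage from image service failed" entries per image.
    pat = re.compile(r'"PullImage from image service failed".*?image="([^"]+)"')
    counts = collections.Counter()
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"  # assumed path
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1
    for image, n in counts.most_common():
        print(f"{n:4d}  {image}")
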
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.114132 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnkdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j8r8q_openstack-operators(9f67fdeb-3415-4da5-a78e-66f6afad477f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.115374 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" podUID="9f67fdeb-3415-4da5-a78e-66f6afad477f"
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.417448 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" podUID="9f67fdeb-3415-4da5-a78e-66f6afad477f"
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.646048 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658"
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.646249 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nws2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-fzrxh_openstack-operators(d991f0cd-a82d-443e-b399-ab59ac238b0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:10:03 crc kubenswrapper[4619]: E0126 11:10:03.648379 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" podUID="d991f0cd-a82d-443e-b399-ab59ac238b0b"
Jan 26 11:10:04 crc kubenswrapper[4619]: E0126 11:10:04.420206 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" podUID="d991f0cd-a82d-443e-b399-ab59ac238b0b"
Jan 26 11:10:05 crc kubenswrapper[4619]: I0126 11:10:05.649510 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-h44rl"]
Jan 26 11:10:05 crc kubenswrapper[4619]: W0126 11:10:05.681881 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod817a0b42_6961_46cf_b353_38aee1dab88c.slice/crio-548e7579d57d90b8ee711c871f7651613b613ecf1831c2cf0bda2c031eeeb6ed WatchSource:0}: Error finding container 548e7579d57d90b8ee711c871f7651613b613ecf1831c2cf0bda2c031eeeb6ed: Status 404 returned error can't find the container with id 548e7579d57d90b8ee711c871f7651613b613ecf1831c2cf0bda2c031eeeb6ed
Jan 26 11:10:05 crc kubenswrapper[4619]: I0126 11:10:05.763530 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"]
Jan 26 11:10:05 crc kubenswrapper[4619]: I0126 11:10:05.818583 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"]
Jan 26 11:10:05 crc kubenswrapper[4619]: I0126 11:10:05.873347 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhptm"]
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.043165 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.050906 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/366e3862-4a5d-447e-890e-1a1ed1d7bf5f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854nj84s\" (UID: \"366e3862-4a5d-447e-890e-1a1ed1d7bf5f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.195814 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6c9k4"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.204893 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.347423 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.347491 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.351433 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-metrics-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.352156 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3b4348c7-3d25-4d2b-837e-5add3c85cd30-webhook-certs\") pod \"openstack-operator-controller-manager-77bff5b64d-pzhsq\" (UID: \"3b4348c7-3d25-4d2b-837e-5add3c85cd30\") " pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.437086 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" event={"ID":"7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a","Type":"ContainerStarted","Data":"445bb21b89be0227254b17164d053852c4697612083a0eaeb15b9cb5d73d8563"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.437677 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.438695 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf" event={"ID":"9bd38ee3-e401-40e3-8fdc-73722e175d2f","Type":"ContainerStarted","Data":"44fb2a0548b8f17a7106957ae6e47e45ae3a38cbdb2fb54f1e528be35151816b"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.439029 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.440174 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2" event={"ID":"d75eb578-095c-4ad4-b85d-c78417306fb0","Type":"ContainerStarted","Data":"d029791f9dc8961c94652e117d1b48b2caf51942eab5ee6c44d01025b9cfc553"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.440489 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.441517 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" event={"ID":"ad531f03-5ce8-475e-923a-15a9561e79d0","Type":"ContainerStarted","Data":"f7b950d8b7ced0b08990d92933b9f732ded7ca1cf4b389dd29ddef1f689197d7"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.441859 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.442559 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerStarted","Data":"74566f7b56806a49335a3b6a01fa48007b8a793faba280c982be70adbf2108ef"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.443927 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p" event={"ID":"78e0a81b-7050-4a6b-8f89-b1f02cf2bed4","Type":"ContainerStarted","Data":"fbd09e2a7f1d6009758ef04a0f833160b509f7a459e2f72d1004364c8e3f2cb5"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.444731 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.445517 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" event={"ID":"817a0b42-6961-46cf-b353-38aee1dab88c","Type":"ContainerStarted","Data":"548e7579d57d90b8ee711c871f7651613b613ecf1831c2cf0bda2c031eeeb6ed"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.446266 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhptm" event={"ID":"d179e424-352f-4f5b-afd3-c68b8e79c096","Type":"ContainerStarted","Data":"b908084380f63741758f8358bfe905a31c108441aace7a64a83458505650ca57"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.447011 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerStarted","Data":"4fb331bb9104dd11dd6e30909568e5c3d713166c513c07337264b520b7776cc2"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.470263 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" event={"ID":"f2d78077-e281-4b95-a576-892bf5eaea8d","Type":"ContainerStarted","Data":"47efc08b9e36adc97f6da82b5f3318a57e38f2e115c275b8ee677f653580ecc0"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.474511 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" event={"ID":"3edab216-d77f-4b95-b98b-0ed86e9b2305","Type":"ContainerStarted","Data":"ebdcfaf097cda789d0acf8c87d3d02b8c65d63981d9e98f80f7275d8403812e4"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.475405 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.476888 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" event={"ID":"a6eb6ada-8607-4687-a235-e8c5f581e4b4","Type":"ContainerStarted","Data":"7d252ceff0e33b1bb3c8e933513f5c19feb6681d600ba007406a361e9fdd4172"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.477367 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.478641 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k" event={"ID":"3ada408d-b7d5-4d35-b779-65be4855e174","Type":"ContainerStarted","Data":"4527af4e4f2a7ee4d8696f9056191e033ab5106691c6297f371eb6cbe3bd5d28"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.479078 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.490631 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" podStartSLOduration=4.27389386 podStartE2EDuration="33.490600607s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.083820133 +0000 UTC m=+875.117860849" lastFinishedPulling="2026-01-26 11:10:05.30052688 +0000 UTC m=+904.334567596" observedRunningTime="2026-01-26 11:10:06.480319915 +0000 UTC m=+905.514360631" watchObservedRunningTime="2026-01-26 11:10:06.490600607 +0000 UTC m=+905.524641323"
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.494909 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" event={"ID":"861bc5f6-bbf8-4626-aed7-a015389630d2","Type":"ContainerStarted","Data":"106c4b8ba932d52f709618ae4b1c2e1ba2dc9490c5f0a29f987eb119f679d21c"}
Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.495480 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm"
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" podStartSLOduration=4.303934995 podStartE2EDuration="33.520337484s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.057789819 +0000 UTC m=+875.091830535" lastFinishedPulling="2026-01-26 11:10:05.274192308 +0000 UTC m=+904.308233024" observedRunningTime="2026-01-26 11:10:06.519208712 +0000 UTC m=+905.553249428" watchObservedRunningTime="2026-01-26 11:10:06.520337484 +0000 UTC m=+905.554378200" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.531996 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-x2khl" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.535970 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.581302 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf" podStartSLOduration=5.9943542480000005 podStartE2EDuration="33.581282645s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.513569199 +0000 UTC m=+874.547609915" lastFinishedPulling="2026-01-26 11:10:03.100497606 +0000 UTC m=+902.134538312" observedRunningTime="2026-01-26 11:10:06.578114998 +0000 UTC m=+905.612155714" watchObservedRunningTime="2026-01-26 11:10:06.581282645 +0000 UTC m=+905.615323361" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.613203 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" podStartSLOduration=6.365588043 podStartE2EDuration="33.61318512s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.851973893 +0000 UTC m=+874.886014599" lastFinishedPulling="2026-01-26 11:10:03.09957096 +0000 UTC m=+902.133611676" observedRunningTime="2026-01-26 11:10:06.611740361 +0000 UTC m=+905.645781077" watchObservedRunningTime="2026-01-26 11:10:06.61318512 +0000 UTC m=+905.647225836" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.671497 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2" podStartSLOduration=6.397767266 podStartE2EDuration="33.67148039s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.825818085 +0000 UTC m=+874.859858801" lastFinishedPulling="2026-01-26 11:10:03.099531209 +0000 UTC m=+902.133571925" observedRunningTime="2026-01-26 11:10:06.666540694 +0000 UTC m=+905.700581410" watchObservedRunningTime="2026-01-26 11:10:06.67148039 +0000 UTC m=+905.705521116" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.701761 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k" podStartSLOduration=6.118124345 podStartE2EDuration="33.70174514s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.516878401 +0000 UTC m=+874.550919117" lastFinishedPulling="2026-01-26 11:10:03.100499196 +0000 UTC m=+902.134539912" observedRunningTime="2026-01-26 11:10:06.69737155 +0000 UTC m=+905.731412266" watchObservedRunningTime="2026-01-26 11:10:06.70174514 +0000 UTC 
m=+905.735785846" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.753608 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p" podStartSLOduration=6.189956234 podStartE2EDuration="33.753594692s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.152846954 +0000 UTC m=+874.186887670" lastFinishedPulling="2026-01-26 11:10:02.716485412 +0000 UTC m=+901.750526128" observedRunningTime="2026-01-26 11:10:06.731505396 +0000 UTC m=+905.765546112" watchObservedRunningTime="2026-01-26 11:10:06.753594692 +0000 UTC m=+905.787635408" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.757017 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" podStartSLOduration=3.97203159 podStartE2EDuration="33.757008846s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.533039544 +0000 UTC m=+874.567080260" lastFinishedPulling="2026-01-26 11:10:05.3180168 +0000 UTC m=+904.352057516" observedRunningTime="2026-01-26 11:10:06.749341085 +0000 UTC m=+905.783381801" watchObservedRunningTime="2026-01-26 11:10:06.757008846 +0000 UTC m=+905.791049562" Jan 26 11:10:06 crc kubenswrapper[4619]: I0126 11:10:06.779942 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" podStartSLOduration=4.673640727 podStartE2EDuration="33.779915184s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.040102184 +0000 UTC m=+875.074142900" lastFinishedPulling="2026-01-26 11:10:05.146376641 +0000 UTC m=+904.180417357" observedRunningTime="2026-01-26 11:10:06.779194214 +0000 UTC m=+905.813234930" watchObservedRunningTime="2026-01-26 11:10:06.779915184 +0000 UTC m=+905.813955900" Jan 26 11:10:07 crc kubenswrapper[4619]: I0126 11:10:07.281268 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s"] Jan 26 11:10:07 crc kubenswrapper[4619]: I0126 11:10:07.387912 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq"] Jan 26 11:10:07 crc kubenswrapper[4619]: I0126 11:10:07.501863 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" event={"ID":"366e3862-4a5d-447e-890e-1a1ed1d7bf5f","Type":"ContainerStarted","Data":"45119c6e3ad7549c4888b271d42440ca1a478e985ca587c75dfcab512114069c"} Jan 26 11:10:07 crc kubenswrapper[4619]: I0126 11:10:07.502812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" event={"ID":"3b4348c7-3d25-4d2b-837e-5add3c85cd30","Type":"ContainerStarted","Data":"470c11e693b8c544abba49a24209b066fec57fbaf4f5ab94324c7a1ed8570e3f"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.510185 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" event={"ID":"67d01b92-a260-4a23-a395-1e2c5079dbed","Type":"ContainerStarted","Data":"af5b5edc0f1a00b72c2a6fb99aa45c8806883c24b4709d462efc6b23a3da336d"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.510815 4619 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.511521 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" event={"ID":"3b4348c7-3d25-4d2b-837e-5add3c85cd30","Type":"ContainerStarted","Data":"6b29ab99b784a4f3741934e9fd0f4c33851cac3cfe34ecd6c0243592f886db68"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.515279 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" event={"ID":"097a933b-c278-4367-881a-bbd0942d69b3","Type":"ContainerStarted","Data":"2ee92631be6ac1e5c0af3d67e3ef9aa733a0c831dff1f0f35b8a35e4684c5922"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.515659 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.519164 4619 generic.go:334] "Generic (PLEG): container finished" podID="d179e424-352f-4f5b-afd3-c68b8e79c096" containerID="84aee242e7284e58bab00fa481af8c5f211e68eababd579e775a3d33484c5569" exitCode=0 Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.519330 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhptm" event={"ID":"d179e424-352f-4f5b-afd3-c68b8e79c096","Type":"ContainerDied","Data":"84aee242e7284e58bab00fa481af8c5f211e68eababd579e775a3d33484c5569"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.520944 4619 generic.go:334] "Generic (PLEG): container finished" podID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerID="04fa609b83625bd135c5913c0d79f9e1b99b642cc6c8e1be328cc37d7741eacc" exitCode=0 Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.521069 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerDied","Data":"04fa609b83625bd135c5913c0d79f9e1b99b642cc6c8e1be328cc37d7741eacc"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.525588 4619 generic.go:334] "Generic (PLEG): container finished" podID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerID="eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5" exitCode=0 Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.526602 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerDied","Data":"eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5"} Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.528104 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.536098 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" podStartSLOduration=4.9126026320000005 podStartE2EDuration="35.536080931s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.176474392 +0000 UTC m=+874.210515108" lastFinishedPulling="2026-01-26 11:10:05.799952691 +0000 UTC m=+904.833993407" observedRunningTime="2026-01-26 11:10:08.530183179 +0000 UTC m=+907.564223895" 
watchObservedRunningTime="2026-01-26 11:10:08.536080931 +0000 UTC m=+907.570121647" Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.552780 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" podStartSLOduration=5.154457696 podStartE2EDuration="35.552754108s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.400985991 +0000 UTC m=+874.435026707" lastFinishedPulling="2026-01-26 11:10:05.799282403 +0000 UTC m=+904.833323119" observedRunningTime="2026-01-26 11:10:08.544014829 +0000 UTC m=+907.578055565" watchObservedRunningTime="2026-01-26 11:10:08.552754108 +0000 UTC m=+907.586794824" Jan 26 11:10:08 crc kubenswrapper[4619]: I0126 11:10:08.616752 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" podStartSLOduration=6.566722871 podStartE2EDuration="35.616726944s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.120292144 +0000 UTC m=+875.154332860" lastFinishedPulling="2026-01-26 11:10:05.170296217 +0000 UTC m=+904.204336933" observedRunningTime="2026-01-26 11:10:08.589677832 +0000 UTC m=+907.623718558" watchObservedRunningTime="2026-01-26 11:10:08.616726944 +0000 UTC m=+907.650767660" Jan 26 11:10:09 crc kubenswrapper[4619]: I0126 11:10:09.533384 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" event={"ID":"146ce69f-077f-483b-a7f6-d32bb6e2ad05","Type":"ContainerStarted","Data":"441917cc2d590a5cd190c6464f114043d37f4d6334fd7baa43901953483dd904"} Jan 26 11:10:09 crc kubenswrapper[4619]: I0126 11:10:09.533869 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" Jan 26 11:10:09 crc kubenswrapper[4619]: I0126 11:10:09.536994 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" event={"ID":"c4c33d5c-a111-42bd-932d-7b60aaa798be","Type":"ContainerStarted","Data":"0275f850d111f868181e9eb04432b23d5f4c04e3e892077507ce87b90c72db16"} Jan 26 11:10:09 crc kubenswrapper[4619]: I0126 11:10:09.562268 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" podStartSLOduration=3.455849899 podStartE2EDuration="36.562252642s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.78879956 +0000 UTC m=+874.822840276" lastFinishedPulling="2026-01-26 11:10:08.895202303 +0000 UTC m=+907.929243019" observedRunningTime="2026-01-26 11:10:09.559020504 +0000 UTC m=+908.593061230" watchObservedRunningTime="2026-01-26 11:10:09.562252642 +0000 UTC m=+908.596293368" Jan 26 11:10:09 crc kubenswrapper[4619]: I0126 11:10:09.581457 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" podStartSLOduration=2.941341265 podStartE2EDuration="36.581439639s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.001891923 +0000 UTC m=+874.035932639" lastFinishedPulling="2026-01-26 11:10:08.641990297 +0000 UTC m=+907.676031013" observedRunningTime="2026-01-26 11:10:09.577995174 +0000 UTC m=+908.612035890" 
watchObservedRunningTime="2026-01-26 11:10:09.581439639 +0000 UTC m=+908.615480365" Jan 26 11:10:10 crc kubenswrapper[4619]: I0126 11:10:10.284685 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" podStartSLOduration=36.281947686 podStartE2EDuration="36.281947686s" podCreationTimestamp="2026-01-26 11:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:10:09.600543972 +0000 UTC m=+908.634584688" watchObservedRunningTime="2026-01-26 11:10:10.281947686 +0000 UTC m=+909.315988402" Jan 26 11:10:13 crc kubenswrapper[4619]: I0126 11:10:13.599077 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5d6449f6dc-sd74p" Jan 26 11:10:13 crc kubenswrapper[4619]: I0126 11:10:13.630870 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" Jan 26 11:10:13 crc kubenswrapper[4619]: I0126 11:10:13.705795 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2wgql" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.015520 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g6t9k" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.043156 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pwdwf" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.062902 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ltc6c" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.099488 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-59hn2" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.138696 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.176412 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c8mj6" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.235102 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.235151 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:10:14 crc kubenswrapper[4619]: E0126 11:10:14.263458 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podUID="1200fb20-58ac-4e2b-aa47-d8e3bb34578b" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.379846 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-fdtcd" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.417689 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rpdwn" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.453469 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jcstk" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.480017 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-z44mm" Jan 26 11:10:14 crc kubenswrapper[4619]: I0126 11:10:14.572429 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gpm8h" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.536732 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.548445 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77bff5b64d-pzhsq" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.593716 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerStarted","Data":"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.597569 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" event={"ID":"366e3862-4a5d-447e-890e-1a1ed1d7bf5f","Type":"ContainerStarted","Data":"e1b32f21531e4c246a2597cf7a9b2ee4453ba5f0a1a3b44f523e7a434d250d2d"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.598041 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.599782 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" event={"ID":"817a0b42-6961-46cf-b353-38aee1dab88c","Type":"ContainerStarted","Data":"e0c88d684473cd01ca1d4d02301168938569660fd3f2dddd236a25ebb10f9578"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.600202 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.601472 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" 
event={"ID":"0821bfee-e661-4cb0-9079-70ee60bdec02","Type":"ContainerStarted","Data":"64f54c5dbb47787ca24319428f4e75962172b5061e3d926e13bc2376eeb9749d"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.601900 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.621257 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" event={"ID":"0236e799-d5fb-4edf-b0cf-b40093e13c9f","Type":"ContainerStarted","Data":"3be60c826afd9646881e5ba2d21f0559738f3813f8d349dff36b2bd0dd1c2ab8"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.621967 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.624835 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhptm" event={"ID":"d179e424-352f-4f5b-afd3-c68b8e79c096","Type":"ContainerStarted","Data":"96d729f72cdadbd14ce09fc9a0bec9d357dddcc5d6733a2705ea917967b79987"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.635070 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerStarted","Data":"5747e68df6473302f11bee01ec25fa7667714fb3706ef2d27b249af176d86edf"} Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.715020 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" podStartSLOduration=3.016555958 podStartE2EDuration="43.715004705s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.14979882 +0000 UTC m=+874.183839536" lastFinishedPulling="2026-01-26 11:10:15.848247527 +0000 UTC m=+914.882288283" observedRunningTime="2026-01-26 11:10:16.708004223 +0000 UTC m=+915.742044939" watchObservedRunningTime="2026-01-26 11:10:16.715004705 +0000 UTC m=+915.749045421" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.769059 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" podStartSLOduration=35.244438079 podStartE2EDuration="43.769042127s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:10:07.32220089 +0000 UTC m=+906.356241606" lastFinishedPulling="2026-01-26 11:10:15.846804898 +0000 UTC m=+914.880845654" observedRunningTime="2026-01-26 11:10:16.745701107 +0000 UTC m=+915.779741823" watchObservedRunningTime="2026-01-26 11:10:16.769042127 +0000 UTC m=+915.803082853" Jan 26 11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.770782 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" podStartSLOduration=33.757644664 podStartE2EDuration="43.770774975s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:10:05.710719623 +0000 UTC m=+904.744760339" lastFinishedPulling="2026-01-26 11:10:15.723849924 +0000 UTC m=+914.757890650" observedRunningTime="2026-01-26 11:10:16.764973865 +0000 UTC m=+915.799014581" watchObservedRunningTime="2026-01-26 11:10:16.770774975 +0000 UTC m=+915.804815691" Jan 26 
11:10:16 crc kubenswrapper[4619]: I0126 11:10:16.790397 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" podStartSLOduration=3.424672025 podStartE2EDuration="43.790382363s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.483401072 +0000 UTC m=+874.517441788" lastFinishedPulling="2026-01-26 11:10:15.84911137 +0000 UTC m=+914.883152126" observedRunningTime="2026-01-26 11:10:16.785027396 +0000 UTC m=+915.819068112" watchObservedRunningTime="2026-01-26 11:10:16.790382363 +0000 UTC m=+915.824423079" Jan 26 11:10:17 crc kubenswrapper[4619]: I0126 11:10:17.643743 4619 generic.go:334] "Generic (PLEG): container finished" podID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerID="63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55" exitCode=0 Jan 26 11:10:17 crc kubenswrapper[4619]: I0126 11:10:17.643934 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerDied","Data":"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55"} Jan 26 11:10:18 crc kubenswrapper[4619]: I0126 11:10:18.653450 4619 generic.go:334] "Generic (PLEG): container finished" podID="d179e424-352f-4f5b-afd3-c68b8e79c096" containerID="96d729f72cdadbd14ce09fc9a0bec9d357dddcc5d6733a2705ea917967b79987" exitCode=0 Jan 26 11:10:18 crc kubenswrapper[4619]: I0126 11:10:18.653530 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhptm" event={"ID":"d179e424-352f-4f5b-afd3-c68b8e79c096","Type":"ContainerDied","Data":"96d729f72cdadbd14ce09fc9a0bec9d357dddcc5d6733a2705ea917967b79987"} Jan 26 11:10:19 crc kubenswrapper[4619]: I0126 11:10:19.665993 4619 generic.go:334] "Generic (PLEG): container finished" podID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerID="5747e68df6473302f11bee01ec25fa7667714fb3706ef2d27b249af176d86edf" exitCode=0 Jan 26 11:10:19 crc kubenswrapper[4619]: I0126 11:10:19.670279 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerDied","Data":"5747e68df6473302f11bee01ec25fa7667714fb3706ef2d27b249af176d86edf"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.679108 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerStarted","Data":"e841bc225cb2976db03525cbf773aad7fa33fe6582314db7ba1bcecade744037"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.680549 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerStarted","Data":"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.681486 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" event={"ID":"8d9312b1-e850-4099-b5a4-60c113f009a3","Type":"ContainerStarted","Data":"1b8d45174e320ad9fd315310ce1f6fa150f2140a0fba03611fc660ab3f192f79"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.681891 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.682538 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" event={"ID":"9f67fdeb-3415-4da5-a78e-66f6afad477f","Type":"ContainerStarted","Data":"e6e9a612187b619e78ea1318766b24b86be0553fe8ff9661ae3a6932c15aeec6"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.685251 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhptm" event={"ID":"d179e424-352f-4f5b-afd3-c68b8e79c096","Type":"ContainerStarted","Data":"dc9c85daab11839b32f2200f4015a32785e17bded355039fbc31a1397c4a7ce1"} Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.702408 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kzvfk" podStartSLOduration=9.119369134 podStartE2EDuration="20.702391442s" podCreationTimestamp="2026-01-26 11:10:00 +0000 UTC" firstStartedPulling="2026-01-26 11:10:08.523922048 +0000 UTC m=+907.557962764" lastFinishedPulling="2026-01-26 11:10:20.106944356 +0000 UTC m=+919.140985072" observedRunningTime="2026-01-26 11:10:20.697041665 +0000 UTC m=+919.731082381" watchObservedRunningTime="2026-01-26 11:10:20.702391442 +0000 UTC m=+919.736432158" Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.720595 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhptm" podStartSLOduration=12.427340342 podStartE2EDuration="23.72057907s" podCreationTimestamp="2026-01-26 11:09:57 +0000 UTC" firstStartedPulling="2026-01-26 11:10:08.521218744 +0000 UTC m=+907.555259450" lastFinishedPulling="2026-01-26 11:10:19.814457472 +0000 UTC m=+918.848498178" observedRunningTime="2026-01-26 11:10:20.716188119 +0000 UTC m=+919.750228835" watchObservedRunningTime="2026-01-26 11:10:20.72057907 +0000 UTC m=+919.754619786" Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.734706 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j8r8q" podStartSLOduration=2.95600181 podStartE2EDuration="46.734690037s" podCreationTimestamp="2026-01-26 11:09:34 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.035140198 +0000 UTC m=+875.069180914" lastFinishedPulling="2026-01-26 11:10:19.813828415 +0000 UTC m=+918.847869141" observedRunningTime="2026-01-26 11:10:20.730663407 +0000 UTC m=+919.764704123" watchObservedRunningTime="2026-01-26 11:10:20.734690037 +0000 UTC m=+919.768730753" Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.758641 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kqf4" podStartSLOduration=19.262290604 podStartE2EDuration="30.758607724s" podCreationTimestamp="2026-01-26 11:09:50 +0000 UTC" firstStartedPulling="2026-01-26 11:10:08.527560847 +0000 UTC m=+907.561601563" lastFinishedPulling="2026-01-26 11:10:20.023877967 +0000 UTC m=+919.057918683" observedRunningTime="2026-01-26 11:10:20.754133011 +0000 UTC m=+919.788173727" watchObservedRunningTime="2026-01-26 11:10:20.758607724 +0000 UTC m=+919.792648440" Jan 26 11:10:20 crc kubenswrapper[4619]: I0126 11:10:20.790700 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" podStartSLOduration=3.822476146 
podStartE2EDuration="47.790679083s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.730607843 +0000 UTC m=+874.764648559" lastFinishedPulling="2026-01-26 11:10:19.69881078 +0000 UTC m=+918.732851496" observedRunningTime="2026-01-26 11:10:20.785687526 +0000 UTC m=+919.819728242" watchObservedRunningTime="2026-01-26 11:10:20.790679083 +0000 UTC m=+919.824719789" Jan 26 11:10:21 crc kubenswrapper[4619]: I0126 11:10:21.028594 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:21 crc kubenswrapper[4619]: I0126 11:10:21.028676 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:21 crc kubenswrapper[4619]: I0126 11:10:21.712529 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" event={"ID":"d991f0cd-a82d-443e-b399-ab59ac238b0b","Type":"ContainerStarted","Data":"e1225e66bd33fcd98da27c83234b78817bb7d1d9536955182cc1b93e15c50533"} Jan 26 11:10:21 crc kubenswrapper[4619]: I0126 11:10:21.713569 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" Jan 26 11:10:21 crc kubenswrapper[4619]: I0126 11:10:21.734124 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" podStartSLOduration=3.458610415 podStartE2EDuration="48.734091583s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:35.494904018 +0000 UTC m=+874.528944734" lastFinishedPulling="2026-01-26 11:10:20.770385186 +0000 UTC m=+919.804425902" observedRunningTime="2026-01-26 11:10:21.732941523 +0000 UTC m=+920.766982239" watchObservedRunningTime="2026-01-26 11:10:21.734091583 +0000 UTC m=+920.768132299" Jan 26 11:10:22 crc kubenswrapper[4619]: I0126 11:10:22.077870 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kzvfk" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="registry-server" probeResult="failure" output=< Jan 26 11:10:22 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:10:22 crc kubenswrapper[4619]: > Jan 26 11:10:23 crc kubenswrapper[4619]: I0126 11:10:23.619945 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6w9xz" Jan 26 11:10:23 crc kubenswrapper[4619]: I0126 11:10:23.632158 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qvfcm" Jan 26 11:10:23 crc kubenswrapper[4619]: I0126 11:10:23.686899 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-x95m2" Jan 26 11:10:24 crc kubenswrapper[4619]: I0126 11:10:24.341767 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vvxv5" Jan 26 11:10:24 crc kubenswrapper[4619]: I0126 11:10:24.900900 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:24 crc kubenswrapper[4619]: I0126 11:10:24.902523 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:24 crc kubenswrapper[4619]: I0126 11:10:24.907436 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.011810 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.011977 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.012016 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcgj\" (UniqueName: \"kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.112928 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.112982 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcgj\" (UniqueName: \"kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.113028 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.113414 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.113474 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.142216 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lzcgj\" (UniqueName: \"kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj\") pod \"certified-operators-b957r\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.222558 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:25 crc kubenswrapper[4619]: I0126 11:10:25.739237 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:25 crc kubenswrapper[4619]: W0126 11:10:25.764918 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa0af_ddd1_4066_947b_8ffeae6e6896.slice/crio-3d7886650b77336c9836531ed936c800031b0415360aec62344097b3296583f6 WatchSource:0}: Error finding container 3d7886650b77336c9836531ed936c800031b0415360aec62344097b3296583f6: Status 404 returned error can't find the container with id 3d7886650b77336c9836531ed936c800031b0415360aec62344097b3296583f6 Jan 26 11:10:26 crc kubenswrapper[4619]: I0126 11:10:26.212098 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854nj84s" Jan 26 11:10:26 crc kubenswrapper[4619]: I0126 11:10:26.745314 4619 generic.go:334] "Generic (PLEG): container finished" podID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerID="ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f" exitCode=0 Jan 26 11:10:26 crc kubenswrapper[4619]: I0126 11:10:26.745383 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerDied","Data":"ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f"} Jan 26 11:10:26 crc kubenswrapper[4619]: I0126 11:10:26.745433 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerStarted","Data":"3d7886650b77336c9836531ed936c800031b0415360aec62344097b3296583f6"} Jan 26 11:10:27 crc kubenswrapper[4619]: I0126 11:10:27.657030 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:10:27 crc kubenswrapper[4619]: I0126 11:10:27.657363 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:10:27 crc kubenswrapper[4619]: I0126 11:10:27.699771 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:10:27 crc kubenswrapper[4619]: I0126 11:10:27.788511 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhptm" Jan 26 11:10:28 crc kubenswrapper[4619]: I0126 11:10:28.761256 4619 generic.go:334] "Generic (PLEG): container finished" podID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerID="de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef" exitCode=0 Jan 26 11:10:28 crc kubenswrapper[4619]: I0126 11:10:28.761362 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" 
event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerDied","Data":"de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef"} Jan 26 11:10:28 crc kubenswrapper[4619]: I0126 11:10:28.905007 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhptm"] Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.263452 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.287516 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngtbq"] Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.287905 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngtbq" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="registry-server" containerID="cri-o://0e80e14accd9c3e30f1e0412a181f97d8598420e12d43fb926a4a8997640a46c" gracePeriod=2 Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.660496 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-758868c854-h44rl" Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.771655 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerStarted","Data":"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d"} Jan 26 11:10:29 crc kubenswrapper[4619]: I0126 11:10:29.795584 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b957r" podStartSLOduration=3.406934008 podStartE2EDuration="5.795568035s" podCreationTimestamp="2026-01-26 11:10:24 +0000 UTC" firstStartedPulling="2026-01-26 11:10:26.747263781 +0000 UTC m=+925.781304497" lastFinishedPulling="2026-01-26 11:10:29.135897808 +0000 UTC m=+928.169938524" observedRunningTime="2026-01-26 11:10:29.789935081 +0000 UTC m=+928.823975797" watchObservedRunningTime="2026-01-26 11:10:29.795568035 +0000 UTC m=+928.829608751" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.625197 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.632072 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.691074 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.786099 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" event={"ID":"1200fb20-58ac-4e2b-aa47-d8e3bb34578b","Type":"ContainerStarted","Data":"27c6a808f2623e1743bcc850790e82ec3a3ebb11795fd83d1b12ade53273a02d"} Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.786900 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.793299 4619 generic.go:334] "Generic (PLEG): container finished" podID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" 
containerID="0e80e14accd9c3e30f1e0412a181f97d8598420e12d43fb926a4a8997640a46c" exitCode=0 Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.793588 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerDied","Data":"0e80e14accd9c3e30f1e0412a181f97d8598420e12d43fb926a4a8997640a46c"} Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.834304 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" podStartSLOduration=3.664481603 podStartE2EDuration="57.834285261s" podCreationTimestamp="2026-01-26 11:09:33 +0000 UTC" firstStartedPulling="2026-01-26 11:09:36.087169145 +0000 UTC m=+875.121209861" lastFinishedPulling="2026-01-26 11:10:30.256972803 +0000 UTC m=+929.291013519" observedRunningTime="2026-01-26 11:10:30.802822747 +0000 UTC m=+929.836863463" watchObservedRunningTime="2026-01-26 11:10:30.834285261 +0000 UTC m=+929.868325977" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.861141 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:30 crc kubenswrapper[4619]: I0126 11:10:30.866979 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.017193 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") pod \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.017291 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rfr\" (UniqueName: \"kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr\") pod \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.017358 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content\") pod \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\" (UID: \"82722d3b-1952-40e3-87db-a9f4d4c9b83a\") " Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.018385 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities" (OuterVolumeSpecName: "utilities") pod "82722d3b-1952-40e3-87db-a9f4d4c9b83a" (UID: "82722d3b-1952-40e3-87db-a9f4d4c9b83a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.041845 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr" (OuterVolumeSpecName: "kube-api-access-v8rfr") pod "82722d3b-1952-40e3-87db-a9f4d4c9b83a" (UID: "82722d3b-1952-40e3-87db-a9f4d4c9b83a"). InnerVolumeSpecName "kube-api-access-v8rfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.108465 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82722d3b-1952-40e3-87db-a9f4d4c9b83a" (UID: "82722d3b-1952-40e3-87db-a9f4d4c9b83a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.118380 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.118407 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rfr\" (UniqueName: \"kubernetes.io/projected/82722d3b-1952-40e3-87db-a9f4d4c9b83a-kube-api-access-v8rfr\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.118420 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82722d3b-1952-40e3-87db-a9f4d4c9b83a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.129213 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.206129 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.802248 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngtbq" event={"ID":"82722d3b-1952-40e3-87db-a9f4d4c9b83a","Type":"ContainerDied","Data":"7c46844b742c475cd0881512fdc3730fd0b0bb26ebda46aec12194d99d406e64"} Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.802496 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngtbq" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.802753 4619 scope.go:117] "RemoveContainer" containerID="0e80e14accd9c3e30f1e0412a181f97d8598420e12d43fb926a4a8997640a46c" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.829141 4619 scope.go:117] "RemoveContainer" containerID="f8917b0da83d0f5f8fa6ab9d7331f997c8f930365af5e60cb42e9978f7764155" Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.845286 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngtbq"] Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.853951 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngtbq"] Jan 26 11:10:31 crc kubenswrapper[4619]: I0126 11:10:31.865794 4619 scope.go:117] "RemoveContainer" containerID="475e2f03beeabed2c4d89eb8b2f75f613df0a990ea6e3c39aebb93df2d9699f3" Jan 26 11:10:33 crc kubenswrapper[4619]: I0126 11:10:33.269869 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" path="/var/lib/kubelet/pods/82722d3b-1952-40e3-87db-a9f4d4c9b83a/volumes" Jan 26 11:10:33 crc kubenswrapper[4619]: I0126 11:10:33.671968 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"] Jan 26 11:10:33 crc kubenswrapper[4619]: I0126 11:10:33.672369 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kzvfk" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="registry-server" containerID="cri-o://e841bc225cb2976db03525cbf773aad7fa33fe6582314db7ba1bcecade744037" gracePeriod=2 Jan 26 11:10:33 crc kubenswrapper[4619]: I0126 11:10:33.826768 4619 generic.go:334] "Generic (PLEG): container finished" podID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerID="e841bc225cb2976db03525cbf773aad7fa33fe6582314db7ba1bcecade744037" exitCode=0 Jan 26 11:10:33 crc kubenswrapper[4619]: I0126 11:10:33.826969 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerDied","Data":"e841bc225cb2976db03525cbf773aad7fa33fe6582314db7ba1bcecade744037"} Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.060006 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.113727 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fzrxh" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.164561 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content\") pod \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.164694 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities\") pod \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.164730 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mslc\" (UniqueName: \"kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc\") pod \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\" (UID: \"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b\") " Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.166803 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities" (OuterVolumeSpecName: "utilities") pod "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" (UID: "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.186465 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc" (OuterVolumeSpecName: "kube-api-access-2mslc") pod "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" (UID: "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b"). InnerVolumeSpecName "kube-api-access-2mslc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.266550 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.266585 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mslc\" (UniqueName: \"kubernetes.io/projected/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-kube-api-access-2mslc\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.284644 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" (UID: "2ceffce6-7a1a-4ae6-9cfb-56ca067f359b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.368757 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.835384 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzvfk" event={"ID":"2ceffce6-7a1a-4ae6-9cfb-56ca067f359b","Type":"ContainerDied","Data":"74566f7b56806a49335a3b6a01fa48007b8a793faba280c982be70adbf2108ef"} Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.835876 4619 scope.go:117] "RemoveContainer" containerID="e841bc225cb2976db03525cbf773aad7fa33fe6582314db7ba1bcecade744037" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.836037 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzvfk" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.867247 4619 scope.go:117] "RemoveContainer" containerID="5747e68df6473302f11bee01ec25fa7667714fb3706ef2d27b249af176d86edf" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.871468 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"] Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.887198 4619 scope.go:117] "RemoveContainer" containerID="04fa609b83625bd135c5913c0d79f9e1b99b642cc6c8e1be328cc37d7741eacc" Jan 26 11:10:34 crc kubenswrapper[4619]: I0126 11:10:34.889933 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kzvfk"] Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.076802 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"] Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.077073 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4kqf4" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="registry-server" containerID="cri-o://b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08" gracePeriod=2 Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.223024 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.224168 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.270419 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" path="/var/lib/kubelet/pods/2ceffce6-7a1a-4ae6-9cfb-56ca067f359b/volumes" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.275127 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.496650 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.587510 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27n6\" (UniqueName: \"kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6\") pod \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.587649 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities\") pod \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.587672 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content\") pod \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\" (UID: \"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0\") " Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.593213 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6" (OuterVolumeSpecName: "kube-api-access-j27n6") pod "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" (UID: "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0"). InnerVolumeSpecName "kube-api-access-j27n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.594347 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities" (OuterVolumeSpecName: "utilities") pod "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" (UID: "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.595226 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.595257 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27n6\" (UniqueName: \"kubernetes.io/projected/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-kube-api-access-j27n6\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.606302 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" (UID: "2212929e-05ed-45d7-a9b8-2c9c15fc5ca0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.696399 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.844653 4619 generic.go:334] "Generic (PLEG): container finished" podID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerID="b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08" exitCode=0 Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.844722 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerDied","Data":"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08"} Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.844939 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqf4" event={"ID":"2212929e-05ed-45d7-a9b8-2c9c15fc5ca0","Type":"ContainerDied","Data":"4fb331bb9104dd11dd6e30909568e5c3d713166c513c07337264b520b7776cc2"} Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.844966 4619 scope.go:117] "RemoveContainer" containerID="b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.844757 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqf4" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.864168 4619 scope.go:117] "RemoveContainer" containerID="63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.898777 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"] Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.920130 4619 scope.go:117] "RemoveContainer" containerID="eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.922825 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqf4"] Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.931269 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.958871 4619 scope.go:117] "RemoveContainer" containerID="b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08" Jan 26 11:10:35 crc kubenswrapper[4619]: E0126 11:10:35.959641 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08\": container with ID starting with b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08 not found: ID does not exist" containerID="b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.959695 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08"} err="failed to get container status \"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08\": rpc error: code = NotFound desc = could not find container 
\"b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08\": container with ID starting with b2884332ded261f370dd42ca6d44293788c2abdae1e5f8ca34e08dddd0368d08 not found: ID does not exist" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.959727 4619 scope.go:117] "RemoveContainer" containerID="63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55" Jan 26 11:10:35 crc kubenswrapper[4619]: E0126 11:10:35.960276 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55\": container with ID starting with 63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55 not found: ID does not exist" containerID="63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.960324 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55"} err="failed to get container status \"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55\": rpc error: code = NotFound desc = could not find container \"63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55\": container with ID starting with 63e770f55e08bd8a09bee069d96c7e69e4a487d443727970757bf07aa2352c55 not found: ID does not exist" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.960357 4619 scope.go:117] "RemoveContainer" containerID="eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5" Jan 26 11:10:35 crc kubenswrapper[4619]: E0126 11:10:35.961229 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5\": container with ID starting with eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5 not found: ID does not exist" containerID="eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5" Jan 26 11:10:35 crc kubenswrapper[4619]: I0126 11:10:35.961283 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5"} err="failed to get container status \"eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5\": rpc error: code = NotFound desc = could not find container \"eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5\": container with ID starting with eeba73bec92e96a9450ae3c0d833ef33a63183552350ff706e86df4561a475b5 not found: ID does not exist" Jan 26 11:10:37 crc kubenswrapper[4619]: I0126 11:10:37.268532 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" path="/var/lib/kubelet/pods/2212929e-05ed-45d7-a9b8-2c9c15fc5ca0/volumes" Jan 26 11:10:37 crc kubenswrapper[4619]: I0126 11:10:37.474827 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:38 crc kubenswrapper[4619]: I0126 11:10:38.874150 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b957r" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="registry-server" containerID="cri-o://c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d" gracePeriod=2 Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.377125 4619 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.460594 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content\") pod \"82faa0af-ddd1-4066-947b-8ffeae6e6896\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.460648 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities\") pod \"82faa0af-ddd1-4066-947b-8ffeae6e6896\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.460816 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzcgj\" (UniqueName: \"kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj\") pod \"82faa0af-ddd1-4066-947b-8ffeae6e6896\" (UID: \"82faa0af-ddd1-4066-947b-8ffeae6e6896\") " Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.462026 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities" (OuterVolumeSpecName: "utilities") pod "82faa0af-ddd1-4066-947b-8ffeae6e6896" (UID: "82faa0af-ddd1-4066-947b-8ffeae6e6896"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.470105 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj" (OuterVolumeSpecName: "kube-api-access-lzcgj") pod "82faa0af-ddd1-4066-947b-8ffeae6e6896" (UID: "82faa0af-ddd1-4066-947b-8ffeae6e6896"). InnerVolumeSpecName "kube-api-access-lzcgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.512340 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82faa0af-ddd1-4066-947b-8ffeae6e6896" (UID: "82faa0af-ddd1-4066-947b-8ffeae6e6896"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.562523 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzcgj\" (UniqueName: \"kubernetes.io/projected/82faa0af-ddd1-4066-947b-8ffeae6e6896-kube-api-access-lzcgj\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.562560 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.562571 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82faa0af-ddd1-4066-947b-8ffeae6e6896-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.884882 4619 generic.go:334] "Generic (PLEG): container finished" podID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerID="c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d" exitCode=0 Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.885007 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b957r" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.886407 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerDied","Data":"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d"} Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.886540 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b957r" event={"ID":"82faa0af-ddd1-4066-947b-8ffeae6e6896","Type":"ContainerDied","Data":"3d7886650b77336c9836531ed936c800031b0415360aec62344097b3296583f6"} Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.886609 4619 scope.go:117] "RemoveContainer" containerID="c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.918948 4619 scope.go:117] "RemoveContainer" containerID="de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.935266 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.941532 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b957r"] Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.963385 4619 scope.go:117] "RemoveContainer" containerID="ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.985482 4619 scope.go:117] "RemoveContainer" containerID="c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d" Jan 26 11:10:39 crc kubenswrapper[4619]: E0126 11:10:39.986741 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d\": container with ID starting with c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d not found: ID does not exist" containerID="c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.986784 
4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d"} err="failed to get container status \"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d\": rpc error: code = NotFound desc = could not find container \"c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d\": container with ID starting with c5e02337ac21b11be843fc19003785cb321089a0af5155d21295ebe3147a469d not found: ID does not exist" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.986813 4619 scope.go:117] "RemoveContainer" containerID="de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef" Jan 26 11:10:39 crc kubenswrapper[4619]: E0126 11:10:39.987232 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef\": container with ID starting with de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef not found: ID does not exist" containerID="de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.987260 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef"} err="failed to get container status \"de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef\": rpc error: code = NotFound desc = could not find container \"de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef\": container with ID starting with de27231775ea12f8a7d97c6cb4eb5abd8f6d3cf7a88cc241ca975d2e52a561ef not found: ID does not exist" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.987278 4619 scope.go:117] "RemoveContainer" containerID="ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f" Jan 26 11:10:39 crc kubenswrapper[4619]: E0126 11:10:39.987868 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f\": container with ID starting with ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f not found: ID does not exist" containerID="ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f" Jan 26 11:10:39 crc kubenswrapper[4619]: I0126 11:10:39.987895 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f"} err="failed to get container status \"ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f\": rpc error: code = NotFound desc = could not find container \"ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f\": container with ID starting with ff06ff1ae345012ab957081e0f53a70e48430350096598875d2f148956ab0a7f not found: ID does not exist" Jan 26 11:10:41 crc kubenswrapper[4619]: I0126 11:10:41.273335 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" path="/var/lib/kubelet/pods/82faa0af-ddd1-4066-947b-8ffeae6e6896/volumes" Jan 26 11:10:44 crc kubenswrapper[4619]: I0126 11:10:44.234046 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:10:44 crc kubenswrapper[4619]: I0126 11:10:44.234453 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:10:44 crc kubenswrapper[4619]: I0126 11:10:44.608700 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-r479p" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.879369 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"] Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880888 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880902 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880910 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880918 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880928 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880933 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880943 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880949 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880960 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880966 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880974 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880980 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.880991 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.880996 4619 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.881007 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881012 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.881019 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881025 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.881035 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881042 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="extract-content" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.881054 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881060 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="extract-utilities" Jan 26 11:11:01 crc kubenswrapper[4619]: E0126 11:11:01.881069 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881074 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881181 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="82722d3b-1952-40e3-87db-a9f4d4c9b83a" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881194 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="82faa0af-ddd1-4066-947b-8ffeae6e6896" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881202 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="2212929e-05ed-45d7-a9b8-2c9c15fc5ca0" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881208 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ceffce6-7a1a-4ae6-9cfb-56ca067f359b" containerName="registry-server" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.881959 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.885067 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.885220 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.885399 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.885532 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mzrsx" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.898542 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"] Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.945708 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"] Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.946990 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.954236 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.959529 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"] Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.994948 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" Jan 26 11:11:01 crc kubenswrapper[4619]: I0126 11:11:01.995034 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh54k\" (UniqueName: \"kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.096163 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh54k\" (UniqueName: \"kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.096221 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.096257 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" 
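The pod_startup_latency_tracker entries in this section encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For the glance-operator pod above: 43.790s - (11:10:15.849 - 11:09:35.483 = 40.366s) ≈ 3.425s, which matches podStartSLOduration=3.424672025. Below is a minimal Go sketch, assuming lines like the ones above on stdin, for pulling those fields out of the log; the field names come from the entries themselves, while the program, its file name, and its I/O convention are illustrative and not part of kubelet.

// parse_startup.go: extract pod startup latency fields from kubelet journal
// lines (illustrative sketch, not kubelet code).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the pod_startup_latency_tracker entry format seen in this log:
// podStartSLOduration is a bare float in seconds; podStartE2EDuration is a
// quoted Go duration string such as "43.790382363s".
var entry = regexp.MustCompile(
	`"Observed pod startup duration" pod="([^"]+)" ` +
		`podStartSLOduration=([0-9.]+) podStartE2EDuration="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		m := entry.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		slo, err := time.ParseDuration(m[2] + "s")
		if err != nil {
			continue
		}
		e2e, err := time.ParseDuration(m[3])
		if err != nil {
			continue
		}
		// E2E minus SLO approximates the image-pull window, i.e.
		// lastFinishedPulling - firstStartedPulling in the same entry.
		fmt.Printf("%s\te2e=%s\tslo=%s\tpull~%s\n", m[1], e2e, slo, e2e-slo)
	}
}

One caveat: entries in this archived copy are hard-wrapped, so a line-oriented scan like this will miss an entry whose fields are split across wrapped lines (for example the octavia-operator entry above, where podStartSLOduration and podStartE2EDuration land on different lines); running it against the original unwrapped journal avoids that.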
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.096296 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9gs4\" (UniqueName: \"kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.096318 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.097185 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.125408 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh54k\" (UniqueName: \"kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k\") pod \"dnsmasq-dns-675f4bcbfc-24r9j\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.197501 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.197553 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.197597 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9gs4\" (UniqueName: \"kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.198545 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.199026 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.202367 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.214300 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9gs4\" (UniqueName: \"kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4\") pod \"dnsmasq-dns-78dd6ddcc-bpxpr\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.261010 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.675824 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"]
Jan 26 11:11:02 crc kubenswrapper[4619]: I0126 11:11:02.733708 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"]
Jan 26 11:11:02 crc kubenswrapper[4619]: W0126 11:11:02.740443 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0545a0e4_aa91_4ac3_b971_262dfbb92d9e.slice/crio-2c1d7cb2070e99f53eda7eecdba61f2cb2f2db390e3c6fa4cc5637abb76f845a WatchSource:0}: Error finding container 2c1d7cb2070e99f53eda7eecdba61f2cb2f2db390e3c6fa4cc5637abb76f845a: Status 404 returned error can't find the container with id 2c1d7cb2070e99f53eda7eecdba61f2cb2f2db390e3c6fa4cc5637abb76f845a
Jan 26 11:11:03 crc kubenswrapper[4619]: I0126 11:11:03.050655 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" event={"ID":"8a8c3517-34b1-4844-847a-5da9969a71b3","Type":"ContainerStarted","Data":"66bd3980318c91649226ff2c9be200976ddd96b4867b00d0e1197f8456a48f94"}
Jan 26 11:11:03 crc kubenswrapper[4619]: I0126 11:11:03.052508 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" event={"ID":"0545a0e4-aa91-4ac3-b971-262dfbb92d9e","Type":"ContainerStarted","Data":"2c1d7cb2070e99f53eda7eecdba61f2cb2f2db390e3c6fa4cc5637abb76f845a"}
Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.720920 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"]
Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.752600 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"]
Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.754042 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.775596 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"] Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.838778 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.838876 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtkt\" (UniqueName: \"kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.838896 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.939897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.939994 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtkt\" (UniqueName: \"kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.940013 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.940878 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.941534 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:04 crc kubenswrapper[4619]: I0126 11:11:04.976712 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtkt\" (UniqueName: 
\"kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt\") pod \"dnsmasq-dns-666b6646f7-d565h\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") " pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.086981 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.145762 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"] Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.189941 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"] Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.201035 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.201165 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"] Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.351937 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.352041 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tzbh\" (UniqueName: \"kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.352102 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.453896 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.453950 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.454036 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tzbh\" (UniqueName: \"kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.455742 4619 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.456330 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.492988 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tzbh\" (UniqueName: \"kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh\") pod \"dnsmasq-dns-57d769cc4f-s8lf4\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") " pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.578798 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.825096 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"] Jan 26 11:11:05 crc kubenswrapper[4619]: W0126 11:11:05.860473 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5471b09c_ff65_46bc_8d3e_27fe2f881646.slice/crio-2b0625b1ed4e5622b90760edc6e47db6307b69000c4f7abf9521c539100679d9 WatchSource:0}: Error finding container 2b0625b1ed4e5622b90760edc6e47db6307b69000c4f7abf9521c539100679d9: Status 404 returned error can't find the container with id 2b0625b1ed4e5622b90760edc6e47db6307b69000c4f7abf9521c539100679d9 Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.939679 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.949019 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.956067 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.956627 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.956796 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.956951 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.957067 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.957666 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.957711 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sl2kv" Jan 26 11:11:05 crc kubenswrapper[4619]: I0126 11:11:05.962414 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062355 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062401 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062426 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062443 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2x5\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062464 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062503 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062522 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062535 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062548 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.062583 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.111234 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d565h" event={"ID":"5471b09c-ff65-46bc-8d3e-27fe2f881646","Type":"ContainerStarted","Data":"2b0625b1ed4e5622b90760edc6e47db6307b69000c4f7abf9521c539100679d9"} Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.123934 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"] Jan 26 11:11:06 crc kubenswrapper[4619]: W0126 11:11:06.141337 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd9dce7_cf00_4b71_855b_f8d1fbe41736.slice/crio-0a6d8b1a6e02b5f4079620f1f6a6fc12e71f5a21c02646d1a42db5022889672f WatchSource:0}: Error finding container 0a6d8b1a6e02b5f4079620f1f6a6fc12e71f5a21c02646d1a42db5022889672f: Status 404 returned error can't find the container with id 0a6d8b1a6e02b5f4079620f1f6a6fc12e71f5a21c02646d1a42db5022889672f Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164327 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 
11:11:06.164412 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164453 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164473 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2x5\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164488 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164508 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164530 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164550 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164568 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164581 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.164595 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.165780 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.166786 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.166912 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.167101 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.168019 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.168419 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.172787 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.173371 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.176510 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.187389 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.194639 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2x5\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.218019 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") " pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.308560 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.331174 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.332256 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.339002 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.341948 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.342144 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.342302 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.342805 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.343093 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.343336 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2pzqt" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.361709 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469287 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469358 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469380 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469426 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njt9b\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469465 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469481 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469501 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469519 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469540 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469566 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.469601 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.570558 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.570867 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.570890 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.570932 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njt9b\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571027 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571049 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571067 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571085 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571110 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: 
I0126 11:11:06.571132 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571169 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.571954 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.572712 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.573605 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.572305 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.574895 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.576142 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.578052 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.585354 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.586119 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.589739 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.591111 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njt9b\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.605820 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.657467 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:11:06 crc kubenswrapper[4619]: I0126 11:11:06.872978 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:11:06 crc kubenswrapper[4619]: W0126 11:11:06.882226 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fc7d1e_2f71_40fc_ab04_a3d88fc1f3bf.slice/crio-6ba862812bf14c204f81eb75ff540e85170362cc14ed22eb8ddfe667205d2c2f WatchSource:0}: Error finding container 6ba862812bf14c204f81eb75ff540e85170362cc14ed22eb8ddfe667205d2c2f: Status 404 returned error can't find the container with id 6ba862812bf14c204f81eb75ff540e85170362cc14ed22eb8ddfe667205d2c2f Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.121703 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" event={"ID":"cbd9dce7-cf00-4b71-855b-f8d1fbe41736","Type":"ContainerStarted","Data":"0a6d8b1a6e02b5f4079620f1f6a6fc12e71f5a21c02646d1a42db5022889672f"} Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.128087 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerStarted","Data":"6ba862812bf14c204f81eb75ff540e85170362cc14ed22eb8ddfe667205d2c2f"} Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.291774 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.347243 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.348673 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.356005 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.356258 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.356603 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-p74kg" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.357340 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.373676 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.381822 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.493274 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.493315 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rk2v\" (UniqueName: \"kubernetes.io/projected/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kube-api-access-5rk2v\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.493353 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.493368 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.493885 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.494219 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.494386 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.494497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596761 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596813 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596850 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596867 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rk2v\" (UniqueName: \"kubernetes.io/projected/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kube-api-access-5rk2v\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596916 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596934 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.596972 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.597011 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.600187 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.600410 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.601013 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.601764 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.606335 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.611873 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.629506 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5811c2-1a5b-4fc0-aa98-a6604f266891-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.646970 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.653204 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rk2v\" (UniqueName: \"kubernetes.io/projected/8f5811c2-1a5b-4fc0-aa98-a6604f266891-kube-api-access-5rk2v\") pod \"openstack-galera-0\" (UID: \"8f5811c2-1a5b-4fc0-aa98-a6604f266891\") " pod="openstack/openstack-galera-0" Jan 26 11:11:07 crc kubenswrapper[4619]: I0126 11:11:07.674987 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.148550 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerStarted","Data":"6e018d56cca18b5941a952e4672f515c39d0a81bd650fb26bc25d20a36d82632"} Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.499876 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.681588 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.683212 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.686168 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.688718 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vbcdb" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.689141 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.689835 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.702967 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.788542 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.792083 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.797795 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.800898 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vnglg" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.802079 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.843702 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.855688 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856202 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856268 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856360 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856410 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdblb\" (UniqueName: \"kubernetes.io/projected/675ad44b-ca9d-4f4c-947b-06184a5db736-kube-api-access-wdblb\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856443 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.856464 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 
crc kubenswrapper[4619]: I0126 11:11:08.856516 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.958826 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.958890 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-config-data\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.958917 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.958956 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.958992 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kolla-config\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959041 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtp6r\" (UniqueName: \"kubernetes.io/projected/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kube-api-access-rtp6r\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959057 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959085 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959128 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wdblb\" (UniqueName: \"kubernetes.io/projected/675ad44b-ca9d-4f4c-947b-06184a5db736-kube-api-access-wdblb\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959149 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959186 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959210 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959231 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.959935 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.961361 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.961393 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.961577 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.961608 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/675ad44b-ca9d-4f4c-947b-06184a5db736-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.976952 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.992275 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/675ad44b-ca9d-4f4c-947b-06184a5db736-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:08 crc kubenswrapper[4619]: I0126 11:11:08.993561 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdblb\" (UniqueName: \"kubernetes.io/projected/675ad44b-ca9d-4f4c-947b-06184a5db736-kube-api-access-wdblb\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.033889 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"675ad44b-ca9d-4f4c-947b-06184a5db736\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.049240 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.070579 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-config-data\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.070729 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kolla-config\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.070795 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtp6r\" (UniqueName: \"kubernetes.io/projected/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kube-api-access-rtp6r\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.070822 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.070970 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.072647 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kolla-config\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.072804 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fdb3f80-9734-437b-94c1-6abcc8ce995f-config-data\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.076551 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.079971 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fdb3f80-9734-437b-94c1-6abcc8ce995f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.092124 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtp6r\" (UniqueName: \"kubernetes.io/projected/8fdb3f80-9734-437b-94c1-6abcc8ce995f-kube-api-access-rtp6r\") pod \"memcached-0\" (UID: 
\"8fdb3f80-9734-437b-94c1-6abcc8ce995f\") " pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.124023 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.205086 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f5811c2-1a5b-4fc0-aa98-a6604f266891","Type":"ContainerStarted","Data":"ab5513b1c76e778e74b6aebd12291c333eb1ffb96abccb244d48ee4cc6732361"} Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.741371 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 11:11:09 crc kubenswrapper[4619]: I0126 11:11:09.748185 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:11:09 crc kubenswrapper[4619]: W0126 11:11:09.761853 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod675ad44b_ca9d_4f4c_947b_06184a5db736.slice/crio-db2354de72a3603491e0e578cd63c55c8c274805a657959e5676476d2ec970f2 WatchSource:0}: Error finding container db2354de72a3603491e0e578cd63c55c8c274805a657959e5676476d2ec970f2: Status 404 returned error can't find the container with id db2354de72a3603491e0e578cd63c55c8c274805a657959e5676476d2ec970f2 Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.237932 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8fdb3f80-9734-437b-94c1-6abcc8ce995f","Type":"ContainerStarted","Data":"32a8b23458de2d7f6838b0031a2138bf80aaf734f83eec138277084a9466bf06"} Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.279772 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"675ad44b-ca9d-4f4c-947b-06184a5db736","Type":"ContainerStarted","Data":"db2354de72a3603491e0e578cd63c55c8c274805a657959e5676476d2ec970f2"} Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.791053 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.791972 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.806426 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bdgpq" Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.836757 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.848293 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbr4f\" (UniqueName: \"kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f\") pod \"kube-state-metrics-0\" (UID: \"d4e064ab-47fc-497d-b783-9debc84b2c7a\") " pod="openstack/kube-state-metrics-0" Jan 26 11:11:10 crc kubenswrapper[4619]: I0126 11:11:10.959229 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbr4f\" (UniqueName: \"kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f\") pod \"kube-state-metrics-0\" (UID: \"d4e064ab-47fc-497d-b783-9debc84b2c7a\") " pod="openstack/kube-state-metrics-0" Jan 26 11:11:11 crc kubenswrapper[4619]: I0126 11:11:11.002349 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbr4f\" (UniqueName: \"kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f\") pod \"kube-state-metrics-0\" (UID: \"d4e064ab-47fc-497d-b783-9debc84b2c7a\") " pod="openstack/kube-state-metrics-0" Jan 26 11:11:11 crc kubenswrapper[4619]: I0126 11:11:11.122081 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.234015 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.234340 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.234780 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.235502 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.235574 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff" gracePeriod=600 Jan 26 11:11:14 crc 
kubenswrapper[4619]: I0126 11:11:14.717599 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-djjzm"] Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.724768 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm"] Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.724876 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.727089 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.738601 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rbq2j" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.738787 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.756435 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sq2gq"] Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.776097 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.838801 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.838925 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-combined-ca-bundle\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.838995 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-log\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839135 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29t9\" (UniqueName: \"kubernetes.io/projected/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-kube-api-access-l29t9\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839170 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-ovn-controller-tls-certs\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839228 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-lib\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839272 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-scripts\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839299 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-log-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839378 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-scripts\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839498 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dwfj\" (UniqueName: \"kubernetes.io/projected/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-kube-api-access-2dwfj\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839524 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839553 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-run\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.839594 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-etc-ovs\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.851820 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sq2gq"] Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942562 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dwfj\" (UniqueName: \"kubernetes.io/projected/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-kube-api-access-2dwfj\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942648 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942674 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-run\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942706 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-etc-ovs\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942748 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942804 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-combined-ca-bundle\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942826 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-log\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942855 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-ovn-controller-tls-certs\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942873 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l29t9\" (UniqueName: \"kubernetes.io/projected/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-kube-api-access-l29t9\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.942895 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-lib\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.943103 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-scripts\") pod 
\"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.943122 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-log-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.943156 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-scripts\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.943908 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.944228 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-etc-ovs\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.944288 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-run\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.944844 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-log\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.944924 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-run-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.946873 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-scripts\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.947012 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-var-log-ovn\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.947144 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-var-lib\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.950295 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-scripts\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.963636 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-ovn-controller-tls-certs\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.964053 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-combined-ca-bundle\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.966438 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l29t9\" (UniqueName: \"kubernetes.io/projected/b814fe04-5ad5-4a1f-b49b-9f38ea5be2da-kube-api-access-l29t9\") pod \"ovn-controller-djjzm\" (UID: \"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da\") " pod="openstack/ovn-controller-djjzm" Jan 26 11:11:14 crc kubenswrapper[4619]: I0126 11:11:14.977380 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dwfj\" (UniqueName: \"kubernetes.io/projected/1778a60a-b3d9-4f16-a8d4-8c0adf54524f-kube-api-access-2dwfj\") pod \"ovn-controller-ovs-sq2gq\" (UID: \"1778a60a-b3d9-4f16-a8d4-8c0adf54524f\") " pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.078646 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.106255 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.395004 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff" exitCode=0 Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.395044 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff"} Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.395077 4619 scope.go:117] "RemoveContainer" containerID="6738b1743914672b9048dd0ac0796d6733f8691d82b90a8907b894bf9b7c51fb" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.571683 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.573483 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.576242 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.576330 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-m8flg" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.576253 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.576596 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.576735 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.607739 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662596 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662671 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nv79\" (UniqueName: \"kubernetes.io/projected/cc19a957-aa75-443a-bd3a-2696241ffbd1-kube-api-access-5nv79\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662700 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662756 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662791 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662857 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662878 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.662970 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.764780 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.764907 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.764976 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.765000 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.765016 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.765046 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.765069 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nv79\" (UniqueName: \"kubernetes.io/projected/cc19a957-aa75-443a-bd3a-2696241ffbd1-kube-api-access-5nv79\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.765098 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 
11:11:15.765779 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.767280 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.768930 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.769101 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc19a957-aa75-443a-bd3a-2696241ffbd1-config\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.772637 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.773860 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.775934 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc19a957-aa75-443a-bd3a-2696241ffbd1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.787457 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nv79\" (UniqueName: \"kubernetes.io/projected/cc19a957-aa75-443a-bd3a-2696241ffbd1-kube-api-access-5nv79\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.788364 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cc19a957-aa75-443a-bd3a-2696241ffbd1\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:11:15 crc kubenswrapper[4619]: I0126 11:11:15.901878 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.494237 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.495868 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.533840 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.534176 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.535377 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lm9zp"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.535590 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.552707 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.602566 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.602635 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.602657 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.604389 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.604448 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvjf\" (UniqueName: \"kubernetes.io/projected/eaabd9be-2386-41dc-88ef-944ee93da789-kube-api-access-xpvjf\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.604486 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.604519 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.604542 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.705776 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.705820 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.705960 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.705990 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706008 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706034 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706071 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvjf\" (UniqueName: \"kubernetes.io/projected/eaabd9be-2386-41dc-88ef-944ee93da789-kube-api-access-xpvjf\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706100 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706198 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.706845 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.707991 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabd9be-2386-41dc-88ef-944ee93da789-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.715023 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.720890 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.721459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.721695 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaabd9be-2386-41dc-88ef-944ee93da789-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.724018 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvjf\" (UniqueName: \"kubernetes.io/projected/eaabd9be-2386-41dc-88ef-944ee93da789-kube-api-access-xpvjf\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.725317 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eaabd9be-2386-41dc-88ef-944ee93da789\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:17 crc kubenswrapper[4619]: I0126 11:11:17.855188 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:25 crc kubenswrapper[4619]: I0126 11:11:25.695235 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm"]
Jan 26 11:11:33 crc kubenswrapper[4619]: I0126 11:11:33.537725 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm" event={"ID":"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da","Type":"ContainerStarted","Data":"9a6683dd7d982aa4c24c56b6181a29cc3594259842542f71755721120f3de4d5"}
Jan 26 11:11:33 crc kubenswrapper[4619]: I0126 11:11:33.808591 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.193850 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.194251 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-24r9j_openstack(8a8c3517-34b1-4844-847a-5da9969a71b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.195378 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" podUID="8a8c3517-34b1-4844-847a-5da9969a71b3"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.224905 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.225083 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tzbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-s8lf4_openstack(cbd9dce7-cf00-4b71-855b-f8d1fbe41736): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.226544 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" podUID="cbd9dce7-cf00-4b71-855b-f8d1fbe41736"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.229587 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.229771 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9gs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-bpxpr_openstack(0545a0e4-aa91-4ac3-b971-262dfbb92d9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.231390 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" podUID="0545a0e4-aa91-4ac3-b971-262dfbb92d9e"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.237165 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.237332 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdtkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-d565h_openstack(5471b09c-ff65-46bc-8d3e-27fe2f881646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.238478 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-d565h" podUID="5471b09c-ff65-46bc-8d3e-27fe2f881646"
Jan 26 11:11:34 crc kubenswrapper[4619]: I0126 11:11:34.551720 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d4e064ab-47fc-497d-b783-9debc84b2c7a","Type":"ContainerStarted","Data":"d9fc65d2f614fa37d0ba46856ab1e36daa6456ec8ef07300a581af861cc9224e"}
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.556151 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" podUID="cbd9dce7-cf00-4b71-855b-f8d1fbe41736"
Jan 26 11:11:34 crc kubenswrapper[4619]: E0126 11:11:34.556386 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-d565h" podUID="5471b09c-ff65-46bc-8d3e-27fe2f881646"
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.214285 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
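Note the transition in the entries above: a cancelled pull first surfaces as ErrImagePull (11:11:34.195378 and peers), and on the next sync of the same pod the error becomes ImagePullBackOff (11:11:34.556151), meaning the kubelet is now throttling retries. The retry delay grows roughly exponentially; the commonly documented defaults are on the order of 10s doubling up to a 5m cap, though the exact parameters depend on the kubelet version. A sketch of that doubling-with-cap policy, an assumption-labeled illustration rather than kubelet's actual backoff code:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the wait before retry n (0-based) under a simple
// doubling-with-cap policy. The 10s initial / 5m limit values below are
// assumptions chosen to mirror the kubelet's documented image-pull
// backoff behavior; they are not read from the kubelet itself.
func backoff(n int, initial, limit time.Duration) time.Duration {
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("retry %d: wait %v\n", n, backoff(n, 10*time.Second, 5*time.Minute))
	}
	// retry 0: wait 10s ... retry 5 and later: wait 5m0s (capped)
}
```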
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.558117 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f5811c2-1a5b-4fc0-aa98-a6604f266891","Type":"ContainerStarted","Data":"5845f3e81c5b8d6fb3bc6c6f5fa8da5e97602ce2237a0801a034835d25a5fa69"}
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.566264 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8fdb3f80-9734-437b-94c1-6abcc8ce995f","Type":"ContainerStarted","Data":"6e81373d8238d67597d35d219e5a22ace321bf76cdd1727a6bd76a9cf20128a0"}
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.566461 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.568397 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"675ad44b-ca9d-4f4c-947b-06184a5db736","Type":"ContainerStarted","Data":"5b9d0e0f5d36a41998c1f94af85b6f0fd9ac480a086eb0557ed30a1baf901324"}
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.571766 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9"}
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.668448 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.286053399 podStartE2EDuration="27.668422428s" podCreationTimestamp="2026-01-26 11:11:08 +0000 UTC" firstStartedPulling="2026-01-26 11:11:09.784813065 +0000 UTC m=+968.818853781" lastFinishedPulling="2026-01-26 11:11:34.167182094 +0000 UTC m=+993.201222810" observedRunningTime="2026-01-26 11:11:35.658977975 +0000 UTC m=+994.693018691" watchObservedRunningTime="2026-01-26 11:11:35.668422428 +0000 UTC m=+994.702463144"
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.746116 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.774965 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
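The pod_startup_latency_tracker entry for memcached-0 above decomposes cleanly: podStartE2EDuration (27.668422428s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.286053399s) is the same span with the image-pull window (lastFinishedPulling minus firstStartedPulling, 24.382369029s) excluded. A quick check, using the timestamps copied verbatim from that entry (only the Go parsing layout is an assumption):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.String() form used in the log
	// (fractional seconds are optional).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-26 11:11:08 +0000 UTC")               // podCreationTimestamp
	firstPull := parse("2026-01-26 11:11:09.784813065 +0000 UTC")   // firstStartedPulling
	lastPull := parse("2026-01-26 11:11:34.167182094 +0000 UTC")    // lastFinishedPulling
	running := parse("2026-01-26 11:11:35.668422428 +0000 UTC")     // watchObservedRunningTime

	e2e := running.Sub(created)     // 27.668422428s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 24.382369029s spent pulling the image
	slo := e2e - pull               // 3.286053399s  = podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
```

So almost all of memcached-0's 27.7s end-to-end startup was image pulling; the SLO-relevant portion the kubelet attributes to itself is only ~3.3s.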
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.843775 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config\") pod \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") "
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.843877 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9gs4\" (UniqueName: \"kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4\") pod \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") "
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.843988 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc\") pod \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\" (UID: \"0545a0e4-aa91-4ac3-b971-262dfbb92d9e\") "
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.844033 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh54k\" (UniqueName: \"kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k\") pod \"8a8c3517-34b1-4844-847a-5da9969a71b3\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") "
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.844105 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config\") pod \"8a8c3517-34b1-4844-847a-5da9969a71b3\" (UID: \"8a8c3517-34b1-4844-847a-5da9969a71b3\") "
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.844373 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config" (OuterVolumeSpecName: "config") pod "0545a0e4-aa91-4ac3-b971-262dfbb92d9e" (UID: "0545a0e4-aa91-4ac3-b971-262dfbb92d9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.844838 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0545a0e4-aa91-4ac3-b971-262dfbb92d9e" (UID: "0545a0e4-aa91-4ac3-b971-262dfbb92d9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.845460 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.845485 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.847839 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config" (OuterVolumeSpecName: "config") pod "8a8c3517-34b1-4844-847a-5da9969a71b3" (UID: "8a8c3517-34b1-4844-847a-5da9969a71b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.853321 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k" (OuterVolumeSpecName: "kube-api-access-kh54k") pod "8a8c3517-34b1-4844-847a-5da9969a71b3" (UID: "8a8c3517-34b1-4844-847a-5da9969a71b3"). InnerVolumeSpecName "kube-api-access-kh54k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.859951 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4" (OuterVolumeSpecName: "kube-api-access-b9gs4") pod "0545a0e4-aa91-4ac3-b971-262dfbb92d9e" (UID: "0545a0e4-aa91-4ac3-b971-262dfbb92d9e"). InnerVolumeSpecName "kube-api-access-b9gs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.895584 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.946906 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh54k\" (UniqueName: \"kubernetes.io/projected/8a8c3517-34b1-4844-847a-5da9969a71b3-kube-api-access-kh54k\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.946952 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c3517-34b1-4844-847a-5da9969a71b3-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:35 crc kubenswrapper[4619]: I0126 11:11:35.946963 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9gs4\" (UniqueName: \"kubernetes.io/projected/0545a0e4-aa91-4ac3-b971-262dfbb92d9e-kube-api-access-b9gs4\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.285959 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sq2gq"]
Jan 26 11:11:36 crc kubenswrapper[4619]: W0126 11:11:36.337583 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1778a60a_b3d9_4f16_a8d4_8c0adf54524f.slice/crio-6b621c52a6839249c33b961b26c3936ab9e91368844d1e65f94b87d129a8aa07 WatchSource:0}: Error finding container 6b621c52a6839249c33b961b26c3936ab9e91368844d1e65f94b87d129a8aa07: Status 404 returned error can't find the container with id 6b621c52a6839249c33b961b26c3936ab9e91368844d1e65f94b87d129a8aa07
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.579465 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr"
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.579508 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bpxpr" event={"ID":"0545a0e4-aa91-4ac3-b971-262dfbb92d9e","Type":"ContainerDied","Data":"2c1d7cb2070e99f53eda7eecdba61f2cb2f2db390e3c6fa4cc5637abb76f845a"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.581726 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j" event={"ID":"8a8c3517-34b1-4844-847a-5da9969a71b3","Type":"ContainerDied","Data":"66bd3980318c91649226ff2c9be200976ddd96b4867b00d0e1197f8456a48f94"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.581943 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-24r9j"
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.586749 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc19a957-aa75-443a-bd3a-2696241ffbd1","Type":"ContainerStarted","Data":"30e648bc66dcfc5753dbe452732cb810a68b25f2e6bfeb370d5d3d11c5930abd"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.589293 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerStarted","Data":"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.604203 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sq2gq" event={"ID":"1778a60a-b3d9-4f16-a8d4-8c0adf54524f","Type":"ContainerStarted","Data":"6b621c52a6839249c33b961b26c3936ab9e91368844d1e65f94b87d129a8aa07"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.605673 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaabd9be-2386-41dc-88ef-944ee93da789","Type":"ContainerStarted","Data":"5df37af160ee5cb476c1c8ce4fd7e008ec71354ed91de564b8723781ade948b5"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.611600 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerStarted","Data":"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"}
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.699853 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"]
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.708141 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bpxpr"]
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.739292 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"]
Jan 26 11:11:36 crc kubenswrapper[4619]: I0126 11:11:36.746142 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-24r9j"]
Jan 26 11:11:37 crc kubenswrapper[4619]: I0126 11:11:37.275641 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0545a0e4-aa91-4ac3-b971-262dfbb92d9e" path="/var/lib/kubelet/pods/0545a0e4-aa91-4ac3-b971-262dfbb92d9e/volumes"
Jan 26 11:11:37 crc kubenswrapper[4619]: I0126 11:11:37.276430 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8c3517-34b1-4844-847a-5da9969a71b3" path="/var/lib/kubelet/pods/8a8c3517-34b1-4844-847a-5da9969a71b3/volumes"
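Teardown mirrors setup in reverse: UnmountVolume started, UnmountVolume.TearDown succeeded, then "Volume detached" per volume, and finally kubelet_volumes.go removes the now-empty per-pod directory under /var/lib/kubelet/pods/<podUID>/volumes. A read-only sketch that lists whatever per-pod volume directories remain under that layout (the path structure is taken from the entries above; this is an inspection aid, not kubelet logic):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists /var/lib/kubelet/pods/<podUID>/volumes/<plugin> entries, the same
// paths the "Cleaned up orphaned pod volumes dir" entries above refer to.
func main() {
	root := "/var/lib/kubelet/pods"
	pods, err := os.ReadDir(root)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods {
		volDir := filepath.Join(root, p.Name(), "volumes")
		plugins, err := os.ReadDir(volDir)
		if err != nil {
			continue // no volumes dir left: already cleaned up
		}
		for _, plug := range plugins {
			fmt.Printf("%s: %s\n", p.Name(), plug.Name())
		}
	}
}
```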
path="/var/lib/kubelet/pods/8a8c3517-34b1-4844-847a-5da9969a71b3/volumes" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.015865 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gbr5p"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.018690 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.025868 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.066800 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gbr5p"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096040 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-config\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096091 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5kw\" (UniqueName: \"kubernetes.io/projected/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-kube-api-access-qx5kw\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096134 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096166 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-combined-ca-bundle\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096203 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovs-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.096262 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovn-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199571 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbr5p\" (UID: 
\"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199688 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-combined-ca-bundle\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199737 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovs-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199799 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovn-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199846 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-config\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.199866 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5kw\" (UniqueName: \"kubernetes.io/projected/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-kube-api-access-qx5kw\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.200391 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovs-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.200468 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovn-rundir\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.201777 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-config\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.206728 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc 
kubenswrapper[4619]: I0126 11:11:38.209264 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-combined-ca-bundle\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.228177 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5kw\" (UniqueName: \"kubernetes.io/projected/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-kube-api-access-qx5kw\") pod \"ovn-controller-metrics-gbr5p\" (UID: \"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc\") " pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.338666 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.352989 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gbr5p" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.392286 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.393405 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.396319 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.452127 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.505463 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7jg\" (UniqueName: \"kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.505521 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.505545 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.505730 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.607606 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
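Every volume in these entries carries a UniqueName of the form kubernetes.io/<plugin>/<identifier>; this section alone shows the configmap, secret, projected, empty-dir, host-path and local-volume plugins. A tiny splitter makes the convention explicit (illustrative; the sample inputs below are copied from the entries above):

```go
package main

import (
	"fmt"
	"strings"
)

// splitUniqueName breaks "kubernetes.io/<plugin>/<id>" into its parts.
func splitUniqueName(u string) (plugin, id string, ok bool) {
	parts := strings.SplitN(u, "/", 3)
	if len(parts) != 3 || parts[0] != "kubernetes.io" {
		return "", "", false
	}
	return parts[1], parts[2], true
}

func main() {
	for _, u := range []string{
		"kubernetes.io/configmap/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-config",
		"kubernetes.io/host-path/2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc-ovn-rundir",
		"kubernetes.io/local-volume/local-storage06-crc",
	} {
		p, id, _ := splitUniqueName(u)
		fmt.Printf("plugin=%-12s id=%s\n", p, id)
	}
}
```

For the pod-scoped plugins the identifier embeds the pod UID, which is why the same volume name ("config") appears repeatedly across different dnsmasq pods without colliding; the local-volume identifier is node-scoped instead.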
(UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.607740 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7jg\" (UniqueName: \"kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.607780 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.607801 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.609003 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.609082 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.609448 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.649600 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7jg\" (UniqueName: \"kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg\") pod \"dnsmasq-dns-7fd796d7df-2xlg4\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") " pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.724003 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.725832 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.865317 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"] Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.866433 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.869539 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 11:11:38 crc kubenswrapper[4619]: I0126 11:11:38.915806 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"] Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.017646 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckzj\" (UniqueName: \"kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.017690 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.017729 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.017768 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.017799 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.120151 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.120657 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckzj\" (UniqueName: \"kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.120722 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") 
" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.120758 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.120809 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.129231 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.129913 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.130740 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.130800 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.130883 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.162876 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bckzj\" (UniqueName: \"kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj\") pod \"dnsmasq-dns-86db49b7ff-c2zcw\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.261504 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d565h" Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.262563 4619 util.go:30] "No sandbox for pod can be found. 
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.325041 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc\") pod \"5471b09c-ff65-46bc-8d3e-27fe2f881646\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") "
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.325342 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtkt\" (UniqueName: \"kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt\") pod \"5471b09c-ff65-46bc-8d3e-27fe2f881646\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") "
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.325427 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config\") pod \"5471b09c-ff65-46bc-8d3e-27fe2f881646\" (UID: \"5471b09c-ff65-46bc-8d3e-27fe2f881646\") "
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.325731 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5471b09c-ff65-46bc-8d3e-27fe2f881646" (UID: "5471b09c-ff65-46bc-8d3e-27fe2f881646"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.326177 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config" (OuterVolumeSpecName: "config") pod "5471b09c-ff65-46bc-8d3e-27fe2f881646" (UID: "5471b09c-ff65-46bc-8d3e-27fe2f881646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.326551 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.326575 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471b09c-ff65-46bc-8d3e-27fe2f881646-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.329374 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt" (OuterVolumeSpecName: "kube-api-access-qdtkt") pod "5471b09c-ff65-46bc-8d3e-27fe2f881646" (UID: "5471b09c-ff65-46bc-8d3e-27fe2f881646"). InnerVolumeSpecName "kube-api-access-qdtkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.428984 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtkt\" (UniqueName: \"kubernetes.io/projected/5471b09c-ff65-46bc-8d3e-27fe2f881646-kube-api-access-qdtkt\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.651568 4619 generic.go:334] "Generic (PLEG): container finished" podID="8f5811c2-1a5b-4fc0-aa98-a6604f266891" containerID="5845f3e81c5b8d6fb3bc6c6f5fa8da5e97602ce2237a0801a034835d25a5fa69" exitCode=0
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.651665 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f5811c2-1a5b-4fc0-aa98-a6604f266891","Type":"ContainerDied","Data":"5845f3e81c5b8d6fb3bc6c6f5fa8da5e97602ce2237a0801a034835d25a5fa69"}
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.653514 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d565h" event={"ID":"5471b09c-ff65-46bc-8d3e-27fe2f881646","Type":"ContainerDied","Data":"2b0625b1ed4e5622b90760edc6e47db6307b69000c4f7abf9521c539100679d9"}
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.653638 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d565h"
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.716937 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"]
Jan 26 11:11:39 crc kubenswrapper[4619]: I0126 11:11:39.724703 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d565h"]
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.230001 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4"
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.348569 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc\") pod \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") "
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.348805 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tzbh\" (UniqueName: \"kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh\") pod \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") "
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.348922 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config\") pod \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\" (UID: \"cbd9dce7-cf00-4b71-855b-f8d1fbe41736\") "
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.350061 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbd9dce7-cf00-4b71-855b-f8d1fbe41736" (UID: "cbd9dce7-cf00-4b71-855b-f8d1fbe41736"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.351302 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config" (OuterVolumeSpecName: "config") pod "cbd9dce7-cf00-4b71-855b-f8d1fbe41736" (UID: "cbd9dce7-cf00-4b71-855b-f8d1fbe41736"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.354203 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh" (OuterVolumeSpecName: "kube-api-access-2tzbh") pod "cbd9dce7-cf00-4b71-855b-f8d1fbe41736" (UID: "cbd9dce7-cf00-4b71-855b-f8d1fbe41736"). InnerVolumeSpecName "kube-api-access-2tzbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.451100 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.451120 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tzbh\" (UniqueName: \"kubernetes.io/projected/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-kube-api-access-2tzbh\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.451131 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9dce7-cf00-4b71-855b-f8d1fbe41736-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.663681 4619 generic.go:334] "Generic (PLEG): container finished" podID="675ad44b-ca9d-4f4c-947b-06184a5db736" containerID="5b9d0e0f5d36a41998c1f94af85b6f0fd9ac480a086eb0557ed30a1baf901324" exitCode=0
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.663770 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"675ad44b-ca9d-4f4c-947b-06184a5db736","Type":"ContainerDied","Data":"5b9d0e0f5d36a41998c1f94af85b6f0fd9ac480a086eb0557ed30a1baf901324"}
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.666736 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4" event={"ID":"cbd9dce7-cf00-4b71-855b-f8d1fbe41736","Type":"ContainerDied","Data":"0a6d8b1a6e02b5f4079620f1f6a6fc12e71f5a21c02646d1a42db5022889672f"}
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.666816 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s8lf4"
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.737249 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"]
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.740746 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s8lf4"]
Jan 26 11:11:40 crc kubenswrapper[4619]: I0126 11:11:40.789554 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gbr5p"]
Jan 26 11:11:40 crc kubenswrapper[4619]: W0126 11:11:40.848763 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c3f919c_3dd6_4aaf_bfd5_468a33b37fdc.slice/crio-a579f4f6129dc4ec5cdbfd3afc871cec57594f06e7966d4839ffee5e8a0f4905 WatchSource:0}: Error finding container a579f4f6129dc4ec5cdbfd3afc871cec57594f06e7966d4839ffee5e8a0f4905: Status 404 returned error can't find the container with id a579f4f6129dc4ec5cdbfd3afc871cec57594f06e7966d4839ffee5e8a0f4905
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.200534 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"]
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.241300 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"]
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.242469 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.267230 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.267456 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.267524 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.267549 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjbv\" (UniqueName: \"kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.267640 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.279121 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5471b09c-ff65-46bc-8d3e-27fe2f881646" path="/var/lib/kubelet/pods/5471b09c-ff65-46bc-8d3e-27fe2f881646/volumes"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.279508 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd9dce7-cf00-4b71-855b-f8d1fbe41736" path="/var/lib/kubelet/pods/cbd9dce7-cf00-4b71-855b-f8d1fbe41736/volumes"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.279906 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"]
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.369608 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.370049 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.370078 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.370126 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.370147 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjbv\" (UniqueName: \"kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.371025 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.373366 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.374459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8"
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.384199 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.431505 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjbv\" (UniqueName: \"kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv\") pod \"dnsmasq-dns-698758b865-j8vh8\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.566555 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.679572 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gbr5p" event={"ID":"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc","Type":"ContainerStarted","Data":"a579f4f6129dc4ec5cdbfd3afc871cec57594f06e7966d4839ffee5e8a0f4905"} Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.777547 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"] Jan 26 11:11:41 crc kubenswrapper[4619]: I0126 11:11:41.854980 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"] Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.429005 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.462062 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.464673 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.472545 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.472804 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.472933 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.473081 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-twdnw" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.607053 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"] Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.607732 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.607878 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50c002e-11c3-4dc8-b32b-c962da06aecb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.607935 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrpn\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-kube-api-access-jzrpn\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.607985 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.608044 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-cache\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.608084 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-lock\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710421 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: 
\"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710816 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50c002e-11c3-4dc8-b32b-c962da06aecb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710857 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzrpn\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-kube-api-access-jzrpn\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710881 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710916 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-cache\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.710944 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-lock\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.711334 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-lock\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.712003 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.712038 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-j8vh8" event={"ID":"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb","Type":"ContainerStarted","Data":"f0c1b32ee6acc5f1149905b458b1988bbd742f8facc66cbb8452e0b2d46121df"} Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.712305 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e50c002e-11c3-4dc8-b32b-c962da06aecb-cache\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:42 crc kubenswrapper[4619]: E0126 11:11:42.712385 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:11:42 crc kubenswrapper[4619]: E0126 11:11:42.712496 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod 
Jan 26 11:11:42 crc kubenswrapper[4619]: E0126 11:11:42.712718 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:11:43.212696481 +0000 UTC m=+1002.246737197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.721956 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50c002e-11c3-4dc8-b32b-c962da06aecb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.729081 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d4e064ab-47fc-497d-b783-9debc84b2c7a","Type":"ContainerStarted","Data":"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.729820 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzrpn\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-kube-api-access-jzrpn\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.729974 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.741712 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaabd9be-2386-41dc-88ef-944ee93da789","Type":"ContainerStarted","Data":"19b09078037705b86290dbd7edbdd60c8396930c175d5402222218bb0367e97c"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.755666 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.760423 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.811422425 podStartE2EDuration="32.760401589s" podCreationTimestamp="2026-01-26 11:11:10 +0000 UTC" firstStartedPulling="2026-01-26 11:11:34.179670992 +0000 UTC m=+993.213711708" lastFinishedPulling="2026-01-26 11:11:42.128650156 +0000 UTC m=+1001.162690872" observedRunningTime="2026-01-26 11:11:42.74929026 +0000 UTC m=+1001.783330976" watchObservedRunningTime="2026-01-26 11:11:42.760401589 +0000 UTC m=+1001.794442305"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.763006 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc19a957-aa75-443a-bd3a-2696241ffbd1","Type":"ContainerStarted","Data":"fd3de7dc8f067cd2b1674b0477f5ad0d4d2119c1676f691bc1c9ed67117d3710"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.794393 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f5811c2-1a5b-4fc0-aa98-a6604f266891","Type":"ContainerStarted","Data":"3e4dae50aef9fe418631ebddea8431275c95a7df7cca5a8decb4ac390517b52c"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.806596 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" event={"ID":"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3","Type":"ContainerStarted","Data":"16279edff0ad987d46c03e8ed1df6b03fe0e69391af9ebf02a0474f967cd5cf3"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.822686 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.140460188 podStartE2EDuration="36.822670773s" podCreationTimestamp="2026-01-26 11:11:06 +0000 UTC" firstStartedPulling="2026-01-26 11:11:08.552046616 +0000 UTC m=+967.586087332" lastFinishedPulling="2026-01-26 11:11:34.234257201 +0000 UTC m=+993.268297917" observedRunningTime="2026-01-26 11:11:42.813307242 +0000 UTC m=+1001.847347958" watchObservedRunningTime="2026-01-26 11:11:42.822670773 +0000 UTC m=+1001.856711489"
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.828985 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"675ad44b-ca9d-4f4c-947b-06184a5db736","Type":"ContainerStarted","Data":"b46d2b1452a7aeeeec5463df9afa6581ca2b49d54e0b322e6778bfd3c4a978c0"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.841930 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" event={"ID":"b44829fb-057c-442c-9319-4b37d7ccf1b0","Type":"ContainerStarted","Data":"05ff80adf87f41e478b5638160b379a87f8f999d01f779303ac766369d80cf1c"}
Jan 26 11:11:42 crc kubenswrapper[4619]: I0126 11:11:42.855499 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.477030445 podStartE2EDuration="35.855483576s" podCreationTimestamp="2026-01-26 11:11:07 +0000 UTC" firstStartedPulling="2026-01-26 11:11:09.788694652 +0000 UTC m=+968.822735368" lastFinishedPulling="2026-01-26 11:11:34.167147783 +0000 UTC m=+993.201188499" observedRunningTime="2026-01-26 11:11:42.853883961 +0000 UTC m=+1001.887924677" watchObservedRunningTime="2026-01-26 11:11:42.855483576 +0000 UTC m=+1001.889524292"
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.218500 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:43 crc kubenswrapper[4619]: E0126 11:11:43.218830 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 11:11:43 crc kubenswrapper[4619]: E0126 11:11:43.218875 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 11:11:43 crc kubenswrapper[4619]: E0126 11:11:43.218940 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:11:44.218917792 +0000 UTC m=+1003.252958508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found
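The two nestedpendingoperations failures above show the kubelet's per-volume retry delay doubling: 500ms, then 1s, and (further down) 2s, 4s, and 8s, until the missing swift-ring-files ConfigMap appears. A minimal sketch of that doubling pattern as it shows up in the durationBeforeRetry fields; this is illustrative, not the kubelet's actual backoff implementation, and the cap value is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

// Prints the doubling retry delays seen in the log:
// 500ms, 1s, 2s, 4s, 8s, ... The maxDelay cap is an assumed
// value for illustration, not taken from this log.
func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for i := 1; i <= 6; i++ {
		fmt.Printf("retry %d: durationBeforeRetry %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Each failed MountVolume.SetUp above ends with "No retries permitted until <timestamp>", which is simply the failure time plus the current delay in this progression.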
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.859309 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm" event={"ID":"b814fe04-5ad5-4a1f-b49b-9f38ea5be2da","Type":"ContainerStarted","Data":"31822d10b974b21f1075b7bfbab2bb86b8327792beb23c9af37b50ba8788e50c"}
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.859680 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-djjzm"
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.863160 4619 generic.go:334] "Generic (PLEG): container finished" podID="1778a60a-b3d9-4f16-a8d4-8c0adf54524f" containerID="29ec29e6a016ac82d6185fa8b3533bddd06b0aaab280bd0fc230fa57cd6fa358" exitCode=0
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.863302 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sq2gq" event={"ID":"1778a60a-b3d9-4f16-a8d4-8c0adf54524f","Type":"ContainerDied","Data":"29ec29e6a016ac82d6185fa8b3533bddd06b0aaab280bd0fc230fa57cd6fa358"}
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.869295 4619 generic.go:334] "Generic (PLEG): container finished" podID="b44829fb-057c-442c-9319-4b37d7ccf1b0" containerID="d481fc197146b9edb1606b2a6b1e4c70db4207074d0b37fa318dcfffda237936" exitCode=0
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.869493 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" event={"ID":"b44829fb-057c-442c-9319-4b37d7ccf1b0","Type":"ContainerDied","Data":"d481fc197146b9edb1606b2a6b1e4c70db4207074d0b37fa318dcfffda237936"}
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.873155 4619 generic.go:334] "Generic (PLEG): container finished" podID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerID="daeebf884dff31dff7ef37d24c5e0687696c867b6ab7690fa7d1ef93db7c5731" exitCode=0
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.873204 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-j8vh8" event={"ID":"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb","Type":"ContainerDied","Data":"daeebf884dff31dff7ef37d24c5e0687696c867b6ab7690fa7d1ef93db7c5731"}
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.879529 4619 generic.go:334] "Generic (PLEG): container finished" podID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerID="850bb8b129048eed32faedd192446e2d53035b3ac3052a175362bfc6e2180f23" exitCode=0
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.879744 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" event={"ID":"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3","Type":"ContainerDied","Data":"850bb8b129048eed32faedd192446e2d53035b3ac3052a175362bfc6e2180f23"}
Jan 26 11:11:43 crc kubenswrapper[4619]: I0126 11:11:43.882823 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-djjzm" podStartSLOduration=22.800241372 podStartE2EDuration="29.88280886s" podCreationTimestamp="2026-01-26 11:11:14 +0000 UTC" firstStartedPulling="2026-01-26 11:11:33.364910835 +0000 UTC m=+992.398951561" lastFinishedPulling="2026-01-26 11:11:40.447478333 +0000 UTC m=+999.481519049" observedRunningTime="2026-01-26 11:11:43.875106265 +0000 UTC m=+1002.909147001" watchObservedRunningTime="2026-01-26 11:11:43.88280886 +0000 UTC m=+1002.916849576"
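The pod_startup_latency_tracker records report two durations per pod: podStartE2EDuration (observed running time minus podCreationTimestamp) and podStartSLOduration, which by the logged values is the E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The kube-state-metrics-0 entry earlier checks out exactly: 32.760401589s minus a 7.948979164s pull window gives 24.811422425s. A short sketch reproducing that arithmetic from the logged timestamps; the relationship is inferred from these values rather than quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the kube-state-metrics-0 numbers logged above:
// podStartSLOduration=24.811422425 podStartE2EDuration="32.760401589s".
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-26 11:11:10 +0000 UTC")
	firstPull := parse("2026-01-26 11:11:34.179670992 +0000 UTC")
	lastPull := parse("2026-01-26 11:11:42.128650156 +0000 UTC")
	running := parse("2026-01-26 11:11:42.760401589 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}
```

The ovn-controller-djjzm record just above follows the same relation when computed from the monotonic (m=+...) readings.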
watchObservedRunningTime="2026-01-26 11:11:43.88280886 +0000 UTC m=+1002.916849576" Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.246056 4619 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 11:11:44 crc kubenswrapper[4619]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:11:44 crc kubenswrapper[4619]: > podSandboxID="16279edff0ad987d46c03e8ed1df6b03fe0e69391af9ebf02a0474f967cd5cf3" Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.246901 4619 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 11:11:44 crc kubenswrapper[4619]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bckzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-c2zcw_openstack(664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:11:44 crc kubenswrapper[4619]: > logger="UnhandledError" Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.248241 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.253960 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.254126 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.254156 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:11:44 crc kubenswrapper[4619]: E0126 11:11:44.254200 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:11:46.254186716 +0000 UTC m=+1005.288227432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.290239 4619 util.go:48] "No ready sandbox for pod can be found. 
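The container spec dumped in the error above defines both the liveness and readiness probes as a TCPSocketAction against port 5353 (liveness: 3s initial delay, 3s period; readiness: 5s initial delay, 5s period). Functionally such a probe is just a TCP connect attempt from the kubelet; a minimal stand-in, where the pod IP is a placeholder and not a value from this log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Stand-in for the TCPSocket probes in the dnsmasq-dns container
// spec above: attempt a TCP connect to the pod IP on port 5353.
// "10.0.0.1" is a placeholder pod IP.
func main() {
	addr := net.JoinHostPort("10.0.0.1", "5353")
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // TimeoutSeconds:5
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("probe succeeded")
}
```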
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.355678 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config\") pod \"b44829fb-057c-442c-9319-4b37d7ccf1b0\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") "
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.355817 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb\") pod \"b44829fb-057c-442c-9319-4b37d7ccf1b0\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") "
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.355913 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7jg\" (UniqueName: \"kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg\") pod \"b44829fb-057c-442c-9319-4b37d7ccf1b0\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") "
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.356153 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc\") pod \"b44829fb-057c-442c-9319-4b37d7ccf1b0\" (UID: \"b44829fb-057c-442c-9319-4b37d7ccf1b0\") "
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.367175 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg" (OuterVolumeSpecName: "kube-api-access-mg7jg") pod "b44829fb-057c-442c-9319-4b37d7ccf1b0" (UID: "b44829fb-057c-442c-9319-4b37d7ccf1b0"). InnerVolumeSpecName "kube-api-access-mg7jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.383762 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b44829fb-057c-442c-9319-4b37d7ccf1b0" (UID: "b44829fb-057c-442c-9319-4b37d7ccf1b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.389468 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config" (OuterVolumeSpecName: "config") pod "b44829fb-057c-442c-9319-4b37d7ccf1b0" (UID: "b44829fb-057c-442c-9319-4b37d7ccf1b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.402776 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b44829fb-057c-442c-9319-4b37d7ccf1b0" (UID: "b44829fb-057c-442c-9319-4b37d7ccf1b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.463365 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.463395 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.463404 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44829fb-057c-442c-9319-4b37d7ccf1b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.463414 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7jg\" (UniqueName: \"kubernetes.io/projected/b44829fb-057c-442c-9319-4b37d7ccf1b0-kube-api-access-mg7jg\") on node \"crc\" DevicePath \"\""
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.894675 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-j8vh8" event={"ID":"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb","Type":"ContainerStarted","Data":"1262c33e48ef3ad40e73b2b2b91e37af1a0d765d631b0d5876d287d2c8b278c5"}
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.895890 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.905804 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sq2gq" event={"ID":"1778a60a-b3d9-4f16-a8d4-8c0adf54524f","Type":"ContainerStarted","Data":"ebcc82c1197e62bcb5e7794cfc23ee728aec40a255c0c736d3dcdaf2bd92ad8a"}
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.911592 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4"
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.911778 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2xlg4" event={"ID":"b44829fb-057c-442c-9319-4b37d7ccf1b0","Type":"ContainerDied","Data":"05ff80adf87f41e478b5638160b379a87f8f999d01f779303ac766369d80cf1c"}
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.911823 4619 scope.go:117] "RemoveContainer" containerID="d481fc197146b9edb1606b2a6b1e4c70db4207074d0b37fa318dcfffda237936"
Jan 26 11:11:44 crc kubenswrapper[4619]: I0126 11:11:44.919444 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-j8vh8" podStartSLOduration=3.919425741 podStartE2EDuration="3.919425741s" podCreationTimestamp="2026-01-26 11:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:11:44.915294666 +0000 UTC m=+1003.949335382" watchObservedRunningTime="2026-01-26 11:11:44.919425741 +0000 UTC m=+1003.953466457"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.120649 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"]
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.136953 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2xlg4"]
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.291956 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44829fb-057c-442c-9319-4b37d7ccf1b0" path="/var/lib/kubelet/pods/b44829fb-057c-442c-9319-4b37d7ccf1b0/volumes"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.927465 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sq2gq" event={"ID":"1778a60a-b3d9-4f16-a8d4-8c0adf54524f","Type":"ContainerStarted","Data":"451cb73c5909bd73557c64b14349d068d1a91a846129c1c47f4269771e9bcd29"}
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.928068 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sq2gq"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.928085 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sq2gq"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.932339 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" event={"ID":"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3","Type":"ContainerStarted","Data":"ad70a8fddbc48a9912a6ddc16712a68f49bbfceebfea1162843627af1c012b48"}
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.932642 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.952721 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sq2gq" podStartSLOduration=27.853436316 podStartE2EDuration="31.952706601s" podCreationTimestamp="2026-01-26 11:11:14 +0000 UTC" firstStartedPulling="2026-01-26 11:11:36.34825949 +0000 UTC m=+995.382300206" lastFinishedPulling="2026-01-26 11:11:40.447529775 +0000 UTC m=+999.481570491" observedRunningTime="2026-01-26 11:11:45.949127212 +0000 UTC m=+1004.983167938" watchObservedRunningTime="2026-01-26 11:11:45.952706601 +0000 UTC m=+1004.986747317"
Jan 26 11:11:45 crc kubenswrapper[4619]: I0126 11:11:45.976065 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" podStartSLOduration=7.423941223 podStartE2EDuration="7.97602035s" podCreationTimestamp="2026-01-26 11:11:38 +0000 UTC" firstStartedPulling="2026-01-26 11:11:42.054775459 +0000 UTC m=+1001.088816175" lastFinishedPulling="2026-01-26 11:11:42.606854586 +0000 UTC m=+1001.640895302" observedRunningTime="2026-01-26 11:11:45.973838419 +0000 UTC m=+1005.007879145" watchObservedRunningTime="2026-01-26 11:11:45.97602035 +0000 UTC m=+1005.010061066"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.278391 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9swh4"]
Jan 26 11:11:46 crc kubenswrapper[4619]: E0126 11:11:46.278776 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44829fb-057c-442c-9319-4b37d7ccf1b0" containerName="init"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.278796 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44829fb-057c-442c-9319-4b37d7ccf1b0" containerName="init"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.278986 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44829fb-057c-442c-9319-4b37d7ccf1b0" containerName="init"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.279569 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.283515 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.285125 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.285347 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.327906 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9swh4"]
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.330939 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:46 crc kubenswrapper[4619]: E0126 11:11:46.332053 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 11:11:46 crc kubenswrapper[4619]: E0126 11:11:46.332068 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 11:11:46 crc kubenswrapper[4619]: E0126 11:11:46.332107 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:11:50.33209167 +0000 UTC m=+1009.366132496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found
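swift-storage-0 cannot finish volume setup until the swift-ring-files ConfigMap exists; the swift-ring-rebalance-9swh4 job starting here is presumably what publishes it, and the kubelet simply keeps backing off until it does. A hedged client-go sketch of waiting for that ConfigMap from outside the cluster, using names taken from the log; the kubeconfig path is an assumption, and this is an illustration of the dependency, not anything the kubelet itself runs:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Polls until the ConfigMap that swift-storage-0's etc-swift
// projected volume needs exists. The kubeconfig path is an assumed
// location, not taken from this log.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		_, err := client.CoreV1().ConfigMaps("openstack").
			Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
		if err == nil {
			fmt.Println("swift-ring-files exists; etc-swift mount can succeed")
			return
		}
		fmt.Println("not yet:", err)
		time.Sleep(2 * time.Second)
	}
}
```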
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432317 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432364 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432390 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432425 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432763 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.432906 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lh4d\" (UniqueName: \"kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.433147 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535195 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535260 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535284 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535304 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535336 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535370 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.535395 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lh4d\" (UniqueName: \"kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.543379 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.543716 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.543839 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.544465 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.544469 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.544843 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.560753 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lh4d\" (UniqueName: \"kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d\") pod \"swift-ring-rebalance-9swh4\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:46 crc kubenswrapper[4619]: I0126 11:11:46.596599 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9swh4"
Jan 26 11:11:47 crc kubenswrapper[4619]: E0126 11:11:47.177409 4619 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:39794->38.102.83.69:40211: write tcp 38.102.83.69:39794->38.102.83.69:40211: write: broken pipe
Jan 26 11:11:47 crc kubenswrapper[4619]: I0126 11:11:47.676192 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 26 11:11:47 crc kubenswrapper[4619]: I0126 11:11:47.676231 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.642378 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9swh4"]
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.958286 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gbr5p" event={"ID":"2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc","Type":"ContainerStarted","Data":"b95137c75ab1cc0d70b468e1ee5c87f889a85856bb6edf40b4975c1e793860bc"}
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.961488 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cc19a957-aa75-443a-bd3a-2696241ffbd1","Type":"ContainerStarted","Data":"95c75f3497646ef6bde9e1f63e2c4b111305a6f527cebb574ce0cca765069fde"}
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.962443 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9swh4" event={"ID":"ae54b20c-f51c-4b68-9f71-0748e5ba0c32","Type":"ContainerStarted","Data":"40bf64d905fa59e9b4c654f9a54686e5781f344a8c56206bbffb57b339574a45"}
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.964290 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaabd9be-2386-41dc-88ef-944ee93da789","Type":"ContainerStarted","Data":"456ebd857b157c167a43ed216c9aefbf2d639e13587415575dc83db37de03131"}
Jan 26 11:11:48 crc kubenswrapper[4619]: I0126 11:11:48.990081 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gbr5p" podStartSLOduration=4.55390928 podStartE2EDuration="11.990061209s" podCreationTimestamp="2026-01-26 11:11:37 +0000 UTC" firstStartedPulling="2026-01-26 11:11:40.852919449 +0000 UTC m=+999.886960165" lastFinishedPulling="2026-01-26 11:11:48.289071378 +0000 UTC m=+1007.323112094" observedRunningTime="2026-01-26 11:11:48.978298851 +0000 UTC m=+1008.012339567" watchObservedRunningTime="2026-01-26 11:11:48.990061209 +0000 UTC m=+1008.024101925"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.042813 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.391215405 podStartE2EDuration="35.042794496s" podCreationTimestamp="2026-01-26 11:11:14 +0000 UTC" firstStartedPulling="2026-01-26 11:11:35.671564125 +0000 UTC m=+994.705604841" lastFinishedPulling="2026-01-26 11:11:48.323143226 +0000 UTC m=+1007.357183932" observedRunningTime="2026-01-26 11:11:49.012944526 +0000 UTC m=+1008.046985242" watchObservedRunningTime="2026-01-26 11:11:49.042794496 +0000 UTC m=+1008.076835222"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.044544 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.982613536 podStartE2EDuration="33.044535395s" podCreationTimestamp="2026-01-26 11:11:16 +0000 UTC" firstStartedPulling="2026-01-26 11:11:36.332901023 +0000 UTC m=+995.366941739" lastFinishedPulling="2026-01-26 11:11:48.394822882 +0000 UTC m=+1007.428863598" observedRunningTime="2026-01-26 11:11:49.040817562 +0000 UTC m=+1008.074858298" watchObservedRunningTime="2026-01-26 11:11:49.044535395 +0000 UTC m=+1008.078576121"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.050412 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.050464 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.200360 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 26 11:11:49 crc kubenswrapper[4619]: I0126 11:11:49.820256 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 26 11:11:50 crc kubenswrapper[4619]: I0126 11:11:50.008503 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 26 11:11:50 crc kubenswrapper[4619]: I0126 11:11:50.217574 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 26 11:11:50 crc kubenswrapper[4619]: I0126 11:11:50.408047 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0"
Jan 26 11:11:50 crc kubenswrapper[4619]: E0126 11:11:50.408266 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 11:11:50 crc kubenswrapper[4619]: E0126 11:11:50.408307 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 11:11:50 crc kubenswrapper[4619]: E0126 11:11:50.408362 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:11:58.408343164 +0000 UTC m=+1017.442383880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found
Jan 26 11:11:50 crc kubenswrapper[4619]: I0126 11:11:50.856806 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:50 crc kubenswrapper[4619]: I0126 11:11:50.902468 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.088406 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.141323 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.570830 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-j8vh8"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.665008 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"]
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.665449 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="dnsmasq-dns" containerID="cri-o://ad70a8fddbc48a9912a6ddc16712a68f49bbfceebfea1162843627af1c012b48" gracePeriod=10
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.670913 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.902374 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.947470 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 26 11:11:51 crc kubenswrapper[4619]: I0126 11:11:51.986572 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.024288 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.030415 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.296190 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.297402 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.299443 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bqxgp" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.300999 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.301052 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.301360 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.329579 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.354927 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74bz\" (UniqueName: \"kubernetes.io/projected/189b0401-ae3e-44f3-bdcc-9991a88716e8-kube-api-access-w74bz\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.354992 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.355038 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-scripts\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.355066 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.355088 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-config\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.355113 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.355137 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: 
I0126 11:11:52.456056 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456116 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456222 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74bz\" (UniqueName: \"kubernetes.io/projected/189b0401-ae3e-44f3-bdcc-9991a88716e8-kube-api-access-w74bz\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456254 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456287 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-scripts\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456308 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456326 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-config\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.456548 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.457239 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-config\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.457254 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/189b0401-ae3e-44f3-bdcc-9991a88716e8-scripts\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.463000 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.463443 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.472416 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189b0401-ae3e-44f3-bdcc-9991a88716e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.485227 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74bz\" (UniqueName: \"kubernetes.io/projected/189b0401-ae3e-44f3-bdcc-9991a88716e8-kube-api-access-w74bz\") pod \"ovn-northd-0\" (UID: \"189b0401-ae3e-44f3-bdcc-9991a88716e8\") " pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.614212 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.995714 4619 generic.go:334] "Generic (PLEG): container finished" podID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerID="ad70a8fddbc48a9912a6ddc16712a68f49bbfceebfea1162843627af1c012b48" exitCode=0 Jan 26 11:11:52 crc kubenswrapper[4619]: I0126 11:11:52.995788 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" event={"ID":"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3","Type":"ContainerDied","Data":"ad70a8fddbc48a9912a6ddc16712a68f49bbfceebfea1162843627af1c012b48"} Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.077336 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.184359 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb\") pod \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.184407 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config\") pod \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.184488 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb\") pod \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.184529 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bckzj\" (UniqueName: \"kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj\") pod \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.184599 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc\") pod \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\" (UID: \"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3\") " Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.189717 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj" (OuterVolumeSpecName: "kube-api-access-bckzj") pod "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" (UID: "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3"). InnerVolumeSpecName "kube-api-access-bckzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.226448 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" (UID: "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.236420 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" (UID: "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.242967 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.254107 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" (UID: "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.264917 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config" (OuterVolumeSpecName: "config") pod "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" (UID: "664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.286347 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.286375 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.286389 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.286399 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bckzj\" (UniqueName: \"kubernetes.io/projected/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-kube-api-access-bckzj\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:54 crc kubenswrapper[4619]: I0126 11:11:54.286409 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.011023 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" event={"ID":"664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3","Type":"ContainerDied","Data":"16279edff0ad987d46c03e8ed1df6b03fe0e69391af9ebf02a0474f967cd5cf3"} Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.011369 4619 scope.go:117] "RemoveContainer" containerID="ad70a8fddbc48a9912a6ddc16712a68f49bbfceebfea1162843627af1c012b48" Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.011528 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-c2zcw" Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.016482 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9swh4" event={"ID":"ae54b20c-f51c-4b68-9f71-0748e5ba0c32","Type":"ContainerStarted","Data":"3a2c8140f98185a65f6ce2ebeada7ade63d451ca0877a1340072309817178a75"} Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.018557 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"189b0401-ae3e-44f3-bdcc-9991a88716e8","Type":"ContainerStarted","Data":"1f38058c575128eeda391ec32159913a7a629db2a524c6459c932272fde2098a"} Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.042397 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9swh4" podStartSLOduration=3.939562536 podStartE2EDuration="9.042381913s" podCreationTimestamp="2026-01-26 11:11:46 +0000 UTC" firstStartedPulling="2026-01-26 11:11:48.652962466 +0000 UTC m=+1007.687003182" lastFinishedPulling="2026-01-26 11:11:53.755781843 +0000 UTC m=+1012.789822559" observedRunningTime="2026-01-26 11:11:55.039807301 +0000 UTC m=+1014.073848017" watchObservedRunningTime="2026-01-26 11:11:55.042381913 +0000 UTC m=+1014.076422629" Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.044183 4619 scope.go:117] "RemoveContainer" containerID="850bb8b129048eed32faedd192446e2d53035b3ac3052a175362bfc6e2180f23" Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.072298 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"] Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.078672 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-c2zcw"] Jan 26 11:11:55 crc kubenswrapper[4619]: I0126 11:11:55.271480 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" path="/var/lib/kubelet/pods/664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3/volumes" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.028221 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"189b0401-ae3e-44f3-bdcc-9991a88716e8","Type":"ContainerStarted","Data":"f6e09334d01d332f83afd22d3c85c100c5bb5103da9fcd9a44c7f648ac74b7d3"} Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.028496 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"189b0401-ae3e-44f3-bdcc-9991a88716e8","Type":"ContainerStarted","Data":"3b052ef782095fc6c342256d72a42c0801db5f2b7aa79df01d9fb69e420d45e2"} Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.049710 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.84570364 podStartE2EDuration="4.04969541s" podCreationTimestamp="2026-01-26 11:11:52 +0000 UTC" firstStartedPulling="2026-01-26 11:11:54.236320528 +0000 UTC m=+1013.270361234" lastFinishedPulling="2026-01-26 11:11:55.440312288 +0000 UTC m=+1014.474353004" observedRunningTime="2026-01-26 11:11:56.047743875 +0000 UTC m=+1015.081784591" watchObservedRunningTime="2026-01-26 11:11:56.04969541 +0000 UTC m=+1015.083736116" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.406325 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hmlzw"] Jan 26 11:11:56 crc kubenswrapper[4619]: E0126 11:11:56.406672 4619 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="dnsmasq-dns" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.406688 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="dnsmasq-dns" Jan 26 11:11:56 crc kubenswrapper[4619]: E0126 11:11:56.406713 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="init" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.406722 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="init" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.408974 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="664c4bdb-4c19-4ca7-b2b4-96a1e6fb83d3" containerName="dnsmasq-dns" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.409519 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.413731 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.430761 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlzw"] Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.520878 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzf82\" (UniqueName: \"kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.520920 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.622233 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzf82\" (UniqueName: \"kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.622274 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.622963 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.644470 4619 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xzf82\" (UniqueName: \"kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82\") pod \"root-account-create-update-hmlzw\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:56 crc kubenswrapper[4619]: I0126 11:11:56.726257 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:57 crc kubenswrapper[4619]: I0126 11:11:57.046423 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 11:11:57 crc kubenswrapper[4619]: I0126 11:11:57.192860 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlzw"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.052871 4619 generic.go:334] "Generic (PLEG): container finished" podID="57e951c5-f0b1-49c3-8384-8e7a45d3273c" containerID="0c05180dee73982b1b82b9183be6a9cc7ba38482eb009fb0bc9d40d05ebcd000" exitCode=0 Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.053207 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlzw" event={"ID":"57e951c5-f0b1-49c3-8384-8e7a45d3273c","Type":"ContainerDied","Data":"0c05180dee73982b1b82b9183be6a9cc7ba38482eb009fb0bc9d40d05ebcd000"} Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.053326 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlzw" event={"ID":"57e951c5-f0b1-49c3-8384-8e7a45d3273c","Type":"ContainerStarted","Data":"276b358ebb9f959f158a26704bc681f02523e90309388d0ef8a76c866717e9b9"} Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.478832 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:11:58 crc kubenswrapper[4619]: E0126 11:11:58.479054 4619 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:11:58 crc kubenswrapper[4619]: E0126 11:11:58.479082 4619 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:11:58 crc kubenswrapper[4619]: E0126 11:11:58.479140 4619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift podName:e50c002e-11c3-4dc8-b32b-c962da06aecb nodeName:}" failed. No retries permitted until 2026-01-26 11:12:14.479120128 +0000 UTC m=+1033.513160834 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift") pod "swift-storage-0" (UID: "e50c002e-11c3-4dc8-b32b-c962da06aecb") : configmap "swift-ring-files" not found Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.708111 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g9858"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.709258 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.746921 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g9858"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.784154 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmss\" (UniqueName: \"kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.784194 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.788806 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c5a-account-create-update-l6h4b"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.789846 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.791803 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.802195 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c5a-account-create-update-l6h4b"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.886474 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmss\" (UniqueName: \"kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.886757 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.887069 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwpw\" (UniqueName: \"kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.887134 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.887564 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.912221 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmss\" (UniqueName: \"kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss\") pod \"keystone-db-create-g9858\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " pod="openstack/keystone-db-create-g9858" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.980310 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lctj2"] Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.981564 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lctj2" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.988502 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpwpw\" (UniqueName: \"kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.989342 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:58 crc kubenswrapper[4619]: I0126 11:11:58.988613 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.012715 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpwpw\" (UniqueName: \"kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw\") pod \"keystone-5c5a-account-create-update-l6h4b\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.026765 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g9858" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.029423 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lctj2"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.090806 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.091075 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6987\" (UniqueName: \"kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.111019 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.120242 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3208-account-create-update-cvfx5"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.123519 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.129933 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.131190 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3208-account-create-update-cvfx5"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.192174 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6987\" (UniqueName: \"kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.192290 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts\") pod \"placement-3208-account-create-update-cvfx5\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.192342 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.192365 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtxc\" (UniqueName: \"kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc\") pod \"placement-3208-account-create-update-cvfx5\" (UID: 
\"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.193300 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.218439 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6987\" (UniqueName: \"kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987\") pod \"placement-db-create-lctj2\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.294485 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts\") pod \"placement-3208-account-create-update-cvfx5\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.294562 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtxc\" (UniqueName: \"kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc\") pod \"placement-3208-account-create-update-cvfx5\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.296256 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts\") pod \"placement-3208-account-create-update-cvfx5\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.301094 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lctj2" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.318164 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtxc\" (UniqueName: \"kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc\") pod \"placement-3208-account-create-update-cvfx5\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.355770 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fwhc7"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.356834 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.364654 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fwhc7"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.409173 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8s8\" (UniqueName: \"kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.409257 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.471769 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-175c-account-create-update-76xvz"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.472831 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.484097 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.491211 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-175c-account-create-update-76xvz"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.514793 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8s8\" (UniqueName: \"kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.514853 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.516394 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.536518 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8s8\" (UniqueName: \"kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8\") pod \"glance-db-create-fwhc7\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.543440 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.571535 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlzw" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.616865 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jrz\" (UniqueName: \"kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz\") pod \"glance-175c-account-create-update-76xvz\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.616941 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts\") pod \"glance-175c-account-create-update-76xvz\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.672570 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fwhc7" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.679687 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g9858"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.712238 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c5a-account-create-update-l6h4b"] Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.717428 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzf82\" (UniqueName: \"kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82\") pod \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.717467 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts\") pod \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\" (UID: \"57e951c5-f0b1-49c3-8384-8e7a45d3273c\") " Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.717956 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts\") pod \"glance-175c-account-create-update-76xvz\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.718083 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jrz\" (UniqueName: \"kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz\") pod \"glance-175c-account-create-update-76xvz\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.719171 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts\") pod \"glance-175c-account-create-update-76xvz\" (UID: 
\"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.719489 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57e951c5-f0b1-49c3-8384-8e7a45d3273c" (UID: "57e951c5-f0b1-49c3-8384-8e7a45d3273c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.723738 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82" (OuterVolumeSpecName: "kube-api-access-xzf82") pod "57e951c5-f0b1-49c3-8384-8e7a45d3273c" (UID: "57e951c5-f0b1-49c3-8384-8e7a45d3273c"). InnerVolumeSpecName "kube-api-access-xzf82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.734961 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jrz\" (UniqueName: \"kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz\") pod \"glance-175c-account-create-update-76xvz\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.813567 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.819336 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzf82\" (UniqueName: \"kubernetes.io/projected/57e951c5-f0b1-49c3-8384-8e7a45d3273c-kube-api-access-xzf82\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.819366 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e951c5-f0b1-49c3-8384-8e7a45d3273c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:11:59 crc kubenswrapper[4619]: I0126 11:11:59.964355 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lctj2"] Jan 26 11:12:00 crc kubenswrapper[4619]: W0126 11:12:00.034942 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ff8170_a280_4bb7_88dd_21f76bb168f3.slice/crio-ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234 WatchSource:0}: Error finding container ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234: Status 404 returned error can't find the container with id ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234 Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.049721 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3208-account-create-update-cvfx5"] Jan 26 11:12:00 crc kubenswrapper[4619]: W0126 11:12:00.061141 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode875d8ac_1f96_4846_8f29_cd00cf7d86fe.slice/crio-666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036 WatchSource:0}: Error finding container 666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036: Status 404 returned error can't find the container with id 
666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036 Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.087529 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9858" event={"ID":"b5c70981-4cae-4170-a1ba-c7887aa5da2d","Type":"ContainerStarted","Data":"5d22270acbca4e866d36321e1e44344aad360c39cb33700557dd0f735e485b16"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.087580 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9858" event={"ID":"b5c70981-4cae-4170-a1ba-c7887aa5da2d","Type":"ContainerStarted","Data":"b43418c41fe9e5d46444180931ece860e842f0450630bbaa5a8c36441923625e"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.094209 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlzw" Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.095798 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlzw" event={"ID":"57e951c5-f0b1-49c3-8384-8e7a45d3273c","Type":"ContainerDied","Data":"276b358ebb9f959f158a26704bc681f02523e90309388d0ef8a76c866717e9b9"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.095864 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276b358ebb9f959f158a26704bc681f02523e90309388d0ef8a76c866717e9b9" Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.098165 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c5a-account-create-update-l6h4b" event={"ID":"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2","Type":"ContainerStarted","Data":"feb5275205b3ec51260062f4b01098d1e1579f9c81cc200e97c8eea2bad8f970"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.098217 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c5a-account-create-update-l6h4b" event={"ID":"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2","Type":"ContainerStarted","Data":"580d0797f4bbc44dc379fbe5125008c0d187c5259a3053f6047099c13bf47cb6"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.106352 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-g9858" podStartSLOduration=2.106330198 podStartE2EDuration="2.106330198s" podCreationTimestamp="2026-01-26 11:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:00.10566345 +0000 UTC m=+1019.139704166" watchObservedRunningTime="2026-01-26 11:12:00.106330198 +0000 UTC m=+1019.140370914" Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.107019 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lctj2" event={"ID":"97ff8170-a280-4bb7-88dd-21f76bb168f3","Type":"ContainerStarted","Data":"ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.120967 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3208-account-create-update-cvfx5" event={"ID":"e875d8ac-1f96-4846-8f29-cd00cf7d86fe","Type":"ContainerStarted","Data":"666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036"} Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.158901 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c5a-account-create-update-l6h4b" podStartSLOduration=2.158874121 podStartE2EDuration="2.158874121s" 
podCreationTimestamp="2026-01-26 11:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:00.138229426 +0000 UTC m=+1019.172270142" watchObservedRunningTime="2026-01-26 11:12:00.158874121 +0000 UTC m=+1019.192914847" Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.228918 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fwhc7"] Jan 26 11:12:00 crc kubenswrapper[4619]: W0126 11:12:00.244589 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2937b147_979a_43bf_8eab_467296040a2e.slice/crio-de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093 WatchSource:0}: Error finding container de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093: Status 404 returned error can't find the container with id de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093 Jan 26 11:12:00 crc kubenswrapper[4619]: I0126 11:12:00.382171 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-175c-account-create-update-76xvz"] Jan 26 11:12:00 crc kubenswrapper[4619]: W0126 11:12:00.396856 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fe9f8cc_1a25_4194_ab2b_4693561283e1.slice/crio-368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397 WatchSource:0}: Error finding container 368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397: Status 404 returned error can't find the container with id 368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.130791 4619 generic.go:334] "Generic (PLEG): container finished" podID="b5c70981-4cae-4170-a1ba-c7887aa5da2d" containerID="5d22270acbca4e866d36321e1e44344aad360c39cb33700557dd0f735e485b16" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.130995 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9858" event={"ID":"b5c70981-4cae-4170-a1ba-c7887aa5da2d","Type":"ContainerDied","Data":"5d22270acbca4e866d36321e1e44344aad360c39cb33700557dd0f735e485b16"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.133374 4619 generic.go:334] "Generic (PLEG): container finished" podID="5fe9f8cc-1a25-4194-ab2b-4693561283e1" containerID="673fe103122cf652a0f5b926a3f7f696fcf3bdcf8dbb86d9b3505a7ad68c0492" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.133442 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-175c-account-create-update-76xvz" event={"ID":"5fe9f8cc-1a25-4194-ab2b-4693561283e1","Type":"ContainerDied","Data":"673fe103122cf652a0f5b926a3f7f696fcf3bdcf8dbb86d9b3505a7ad68c0492"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.133470 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-175c-account-create-update-76xvz" event={"ID":"5fe9f8cc-1a25-4194-ab2b-4693561283e1","Type":"ContainerStarted","Data":"368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.136718 4619 generic.go:334] "Generic (PLEG): container finished" podID="ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" containerID="feb5275205b3ec51260062f4b01098d1e1579f9c81cc200e97c8eea2bad8f970" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.136816 4619 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c5a-account-create-update-l6h4b" event={"ID":"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2","Type":"ContainerDied","Data":"feb5275205b3ec51260062f4b01098d1e1579f9c81cc200e97c8eea2bad8f970"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.139539 4619 generic.go:334] "Generic (PLEG): container finished" podID="97ff8170-a280-4bb7-88dd-21f76bb168f3" containerID="93d3c48fb3d786f7ec6053258f0469cf2c48ebf19628ab140ea918ec05229ca5" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.139654 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lctj2" event={"ID":"97ff8170-a280-4bb7-88dd-21f76bb168f3","Type":"ContainerDied","Data":"93d3c48fb3d786f7ec6053258f0469cf2c48ebf19628ab140ea918ec05229ca5"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.148377 4619 generic.go:334] "Generic (PLEG): container finished" podID="2937b147-979a-43bf-8eab-467296040a2e" containerID="d3a53c80677671d1350c62ef8936a1bfa5a9e5620a002d16bf33ea60a8f75e8a" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.148485 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fwhc7" event={"ID":"2937b147-979a-43bf-8eab-467296040a2e","Type":"ContainerDied","Data":"d3a53c80677671d1350c62ef8936a1bfa5a9e5620a002d16bf33ea60a8f75e8a"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.148524 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fwhc7" event={"ID":"2937b147-979a-43bf-8eab-467296040a2e","Type":"ContainerStarted","Data":"de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093"} Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.150915 4619 generic.go:334] "Generic (PLEG): container finished" podID="e875d8ac-1f96-4846-8f29-cd00cf7d86fe" containerID="240c0a32e20dc33444985d528805d5c6ff200e99fb9f87a1becaa491a9eeb853" exitCode=0 Jan 26 11:12:01 crc kubenswrapper[4619]: I0126 11:12:01.151081 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3208-account-create-update-cvfx5" event={"ID":"e875d8ac-1f96-4846-8f29-cd00cf7d86fe","Type":"ContainerDied","Data":"240c0a32e20dc33444985d528805d5c6ff200e99fb9f87a1becaa491a9eeb853"} Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.160803 4619 generic.go:334] "Generic (PLEG): container finished" podID="ae54b20c-f51c-4b68-9f71-0748e5ba0c32" containerID="3a2c8140f98185a65f6ce2ebeada7ade63d451ca0877a1340072309817178a75" exitCode=0 Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.160888 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9swh4" event={"ID":"ae54b20c-f51c-4b68-9f71-0748e5ba0c32","Type":"ContainerDied","Data":"3a2c8140f98185a65f6ce2ebeada7ade63d451ca0877a1340072309817178a75"} Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.560871 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.650526 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hmlzw"] Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.661027 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hmlzw"] Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.698265 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts\") pod \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.698340 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jrz\" (UniqueName: \"kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz\") pod \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\" (UID: \"5fe9f8cc-1a25-4194-ab2b-4693561283e1\") " Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.699217 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fe9f8cc-1a25-4194-ab2b-4693561283e1" (UID: "5fe9f8cc-1a25-4194-ab2b-4693561283e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.723850 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz" (OuterVolumeSpecName: "kube-api-access-44jrz") pod "5fe9f8cc-1a25-4194-ab2b-4693561283e1" (UID: "5fe9f8cc-1a25-4194-ab2b-4693561283e1"). InnerVolumeSpecName "kube-api-access-44jrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.801510 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe9f8cc-1a25-4194-ab2b-4693561283e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.801536 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jrz\" (UniqueName: \"kubernetes.io/projected/5fe9f8cc-1a25-4194-ab2b-4693561283e1-kube-api-access-44jrz\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.858587 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lctj2" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.862969 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.866944 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.884289 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g9858" Jan 26 11:12:02 crc kubenswrapper[4619]: I0126 11:12:02.891572 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fwhc7" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006376 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts\") pod \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006646 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6987\" (UniqueName: \"kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987\") pod \"97ff8170-a280-4bb7-88dd-21f76bb168f3\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006681 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts\") pod \"2937b147-979a-43bf-8eab-467296040a2e\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006704 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts\") pod \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006751 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpwpw\" (UniqueName: \"kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw\") pod \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\" (UID: \"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006777 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt8s8\" (UniqueName: \"kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8\") pod \"2937b147-979a-43bf-8eab-467296040a2e\" (UID: \"2937b147-979a-43bf-8eab-467296040a2e\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006796 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmss\" (UniqueName: \"kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss\") pod \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\" (UID: \"b5c70981-4cae-4170-a1ba-c7887aa5da2d\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006820 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts\") pod \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006840 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5c70981-4cae-4170-a1ba-c7887aa5da2d" (UID: "b5c70981-4cae-4170-a1ba-c7887aa5da2d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006876 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtxc\" (UniqueName: \"kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc\") pod \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\" (UID: \"e875d8ac-1f96-4846-8f29-cd00cf7d86fe\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.006913 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts\") pod \"97ff8170-a280-4bb7-88dd-21f76bb168f3\" (UID: \"97ff8170-a280-4bb7-88dd-21f76bb168f3\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.007247 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c70981-4cae-4170-a1ba-c7887aa5da2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.007715 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97ff8170-a280-4bb7-88dd-21f76bb168f3" (UID: "97ff8170-a280-4bb7-88dd-21f76bb168f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.007805 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2937b147-979a-43bf-8eab-467296040a2e" (UID: "2937b147-979a-43bf-8eab-467296040a2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.008006 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" (UID: "ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.008137 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e875d8ac-1f96-4846-8f29-cd00cf7d86fe" (UID: "e875d8ac-1f96-4846-8f29-cd00cf7d86fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.010873 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw" (OuterVolumeSpecName: "kube-api-access-qpwpw") pod "ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" (UID: "ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2"). InnerVolumeSpecName "kube-api-access-qpwpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.011286 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987" (OuterVolumeSpecName: "kube-api-access-m6987") pod "97ff8170-a280-4bb7-88dd-21f76bb168f3" (UID: "97ff8170-a280-4bb7-88dd-21f76bb168f3"). InnerVolumeSpecName "kube-api-access-m6987". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.011683 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss" (OuterVolumeSpecName: "kube-api-access-rgmss") pod "b5c70981-4cae-4170-a1ba-c7887aa5da2d" (UID: "b5c70981-4cae-4170-a1ba-c7887aa5da2d"). InnerVolumeSpecName "kube-api-access-rgmss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.012078 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc" (OuterVolumeSpecName: "kube-api-access-bhtxc") pod "e875d8ac-1f96-4846-8f29-cd00cf7d86fe" (UID: "e875d8ac-1f96-4846-8f29-cd00cf7d86fe"). InnerVolumeSpecName "kube-api-access-bhtxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.012269 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8" (OuterVolumeSpecName: "kube-api-access-kt8s8") pod "2937b147-979a-43bf-8eab-467296040a2e" (UID: "2937b147-979a-43bf-8eab-467296040a2e"). InnerVolumeSpecName "kube-api-access-kt8s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109063 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtxc\" (UniqueName: \"kubernetes.io/projected/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-kube-api-access-bhtxc\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109107 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ff8170-a280-4bb7-88dd-21f76bb168f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109126 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6987\" (UniqueName: \"kubernetes.io/projected/97ff8170-a280-4bb7-88dd-21f76bb168f3-kube-api-access-m6987\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109144 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2937b147-979a-43bf-8eab-467296040a2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109161 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109177 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpwpw\" (UniqueName: \"kubernetes.io/projected/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2-kube-api-access-qpwpw\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109194 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt8s8\" (UniqueName: \"kubernetes.io/projected/2937b147-979a-43bf-8eab-467296040a2e-kube-api-access-kt8s8\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109211 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmss\" (UniqueName: \"kubernetes.io/projected/b5c70981-4cae-4170-a1ba-c7887aa5da2d-kube-api-access-rgmss\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.109228 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e875d8ac-1f96-4846-8f29-cd00cf7d86fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.173216 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lctj2" event={"ID":"97ff8170-a280-4bb7-88dd-21f76bb168f3","Type":"ContainerDied","Data":"ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.173257 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbeaabae17715ec4b3cea7ada2bcd07423da11525deba16f0a08ce28d341234" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.173307 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lctj2" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.175506 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fwhc7" event={"ID":"2937b147-979a-43bf-8eab-467296040a2e","Type":"ContainerDied","Data":"de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.175565 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de94d1cbb0ce7af008430972e1928fc6b8f29a8d343c5c3564b02f8e5b340093" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.175676 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fwhc7" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.189107 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3208-account-create-update-cvfx5" event={"ID":"e875d8ac-1f96-4846-8f29-cd00cf7d86fe","Type":"ContainerDied","Data":"666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.189232 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666bbe75e22523df0c1ac0accc0af0eefc003277a1e7fc818de04365672b7036" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.189842 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3208-account-create-update-cvfx5" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.191333 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9858" event={"ID":"b5c70981-4cae-4170-a1ba-c7887aa5da2d","Type":"ContainerDied","Data":"b43418c41fe9e5d46444180931ece860e842f0450630bbaa5a8c36441923625e"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.191377 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43418c41fe9e5d46444180931ece860e842f0450630bbaa5a8c36441923625e" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.191475 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g9858" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.199704 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-175c-account-create-update-76xvz" event={"ID":"5fe9f8cc-1a25-4194-ab2b-4693561283e1","Type":"ContainerDied","Data":"368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.199742 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="368a874bf4fa749960514aa9358be00b9c3972dfb932f563ab74ceb29580d397" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.199807 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-175c-account-create-update-76xvz" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.205451 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c5a-account-create-update-l6h4b" event={"ID":"ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2","Type":"ContainerDied","Data":"580d0797f4bbc44dc379fbe5125008c0d187c5259a3053f6047099c13bf47cb6"} Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.205512 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580d0797f4bbc44dc379fbe5125008c0d187c5259a3053f6047099c13bf47cb6" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.205480 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c5a-account-create-update-l6h4b" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.297911 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e951c5-f0b1-49c3-8384-8e7a45d3273c" path="/var/lib/kubelet/pods/57e951c5-f0b1-49c3-8384-8e7a45d3273c/volumes" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.451526 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9swh4" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514232 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514331 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514470 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514520 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514546 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514575 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.514634 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lh4d\" (UniqueName: 
\"kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d\") pod \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\" (UID: \"ae54b20c-f51c-4b68-9f71-0748e5ba0c32\") " Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.516698 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.517405 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.519664 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d" (OuterVolumeSpecName: "kube-api-access-2lh4d") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "kube-api-access-2lh4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.523300 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.535907 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.537130 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts" (OuterVolumeSpecName: "scripts") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.542204 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ae54b20c-f51c-4b68-9f71-0748e5ba0c32" (UID: "ae54b20c-f51c-4b68-9f71-0748e5ba0c32"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617009 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617048 4619 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617061 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617073 4619 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617088 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lh4d\" (UniqueName: \"kubernetes.io/projected/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-kube-api-access-2lh4d\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617102 4619 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:03 crc kubenswrapper[4619]: I0126 11:12:03.617113 4619 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ae54b20c-f51c-4b68-9f71-0748e5ba0c32-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.214519 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9swh4" event={"ID":"ae54b20c-f51c-4b68-9f71-0748e5ba0c32","Type":"ContainerDied","Data":"40bf64d905fa59e9b4c654f9a54686e5781f344a8c56206bbffb57b339574a45"} Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.215270 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bf64d905fa59e9b4c654f9a54686e5781f344a8c56206bbffb57b339574a45" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.214560 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9swh4" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653177 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8bqkm"] Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653512 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e875d8ac-1f96-4846-8f29-cd00cf7d86fe" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653525 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="e875d8ac-1f96-4846-8f29-cd00cf7d86fe" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653559 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c70981-4cae-4170-a1ba-c7887aa5da2d" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653567 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c70981-4cae-4170-a1ba-c7887aa5da2d" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653584 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653594 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653605 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e951c5-f0b1-49c3-8384-8e7a45d3273c" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653616 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e951c5-f0b1-49c3-8384-8e7a45d3273c" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653651 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe9f8cc-1a25-4194-ab2b-4693561283e1" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653659 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe9f8cc-1a25-4194-ab2b-4693561283e1" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653673 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae54b20c-f51c-4b68-9f71-0748e5ba0c32" containerName="swift-ring-rebalance" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653681 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae54b20c-f51c-4b68-9f71-0748e5ba0c32" containerName="swift-ring-rebalance" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653697 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ff8170-a280-4bb7-88dd-21f76bb168f3" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653705 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ff8170-a280-4bb7-88dd-21f76bb168f3" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: E0126 11:12:04.653716 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2937b147-979a-43bf-8eab-467296040a2e" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653723 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="2937b147-979a-43bf-8eab-467296040a2e" 
containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653888 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe9f8cc-1a25-4194-ab2b-4693561283e1" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653901 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e951c5-f0b1-49c3-8384-8e7a45d3273c" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653918 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ff8170-a280-4bb7-88dd-21f76bb168f3" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653930 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653944 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c70981-4cae-4170-a1ba-c7887aa5da2d" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653960 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="2937b147-979a-43bf-8eab-467296040a2e" containerName="mariadb-database-create" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653970 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae54b20c-f51c-4b68-9f71-0748e5ba0c32" containerName="swift-ring-rebalance" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.653984 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="e875d8ac-1f96-4846-8f29-cd00cf7d86fe" containerName="mariadb-account-create-update" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.654574 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.657587 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hvljd" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.657760 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.666758 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8bqkm"] Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.739239 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.739537 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9ps\" (UniqueName: \"kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.739561 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.739586 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.840885 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.841075 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.841124 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st9ps\" (UniqueName: \"kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.841153 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data\") pod 
\"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.844903 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.844973 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.847713 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.861103 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st9ps\" (UniqueName: \"kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps\") pod \"glance-db-sync-8bqkm\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:04 crc kubenswrapper[4619]: I0126 11:12:04.980918 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:05 crc kubenswrapper[4619]: I0126 11:12:05.762746 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8bqkm"] Jan 26 11:12:06 crc kubenswrapper[4619]: I0126 11:12:06.230081 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8bqkm" event={"ID":"ad779660-a430-4b5c-91dd-41b9582a4215","Type":"ContainerStarted","Data":"2ca7b8222f2f2a766ddf76a24f605b407f9508393b5f4926f5af74075781a690"} Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.648099 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-swzd7"] Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.649584 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.652722 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.659902 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-swzd7"] Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.797775 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g76v\" (UniqueName: \"kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.797862 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.899647 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g76v\" (UniqueName: \"kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.899723 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.900516 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.935315 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g76v\" (UniqueName: \"kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v\") pod \"root-account-create-update-swzd7\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:07 crc kubenswrapper[4619]: I0126 11:12:07.974060 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:08 crc kubenswrapper[4619]: I0126 11:12:08.247163 4619 generic.go:334] "Generic (PLEG): container finished" podID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerID="1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b" exitCode=0 Jan 26 11:12:08 crc kubenswrapper[4619]: I0126 11:12:08.247567 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerDied","Data":"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b"} Jan 26 11:12:08 crc kubenswrapper[4619]: I0126 11:12:08.250558 4619 generic.go:334] "Generic (PLEG): container finished" podID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerID="8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0" exitCode=0 Jan 26 11:12:08 crc kubenswrapper[4619]: I0126 11:12:08.250595 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerDied","Data":"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"} Jan 26 11:12:08 crc kubenswrapper[4619]: I0126 11:12:08.444955 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-swzd7"] Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.266776 4619 generic.go:334] "Generic (PLEG): container finished" podID="ee547519-04e8-4ff3-be01-6d7783e3a4d3" containerID="e472c9ecd22b6f89b90410de3ac72a312dff62c05bd744b1790f4c74c60332b3" exitCode=0 Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.269260 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerStarted","Data":"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"} Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.269307 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swzd7" event={"ID":"ee547519-04e8-4ff3-be01-6d7783e3a4d3","Type":"ContainerDied","Data":"e472c9ecd22b6f89b90410de3ac72a312dff62c05bd744b1790f4c74c60332b3"} Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.269323 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swzd7" event={"ID":"ee547519-04e8-4ff3-be01-6d7783e3a4d3","Type":"ContainerStarted","Data":"51766fe4070dcd6ece52efea4ba56986b367cec77ca3127770bacdd629f19a3f"} Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.269504 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.270064 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerStarted","Data":"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8"} Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.270312 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.304086 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.995086881 podStartE2EDuration="1m5.304071158s" podCreationTimestamp="2026-01-26 11:11:04 +0000 UTC" firstStartedPulling="2026-01-26 11:11:06.884338965 +0000 
UTC m=+965.918379681" lastFinishedPulling="2026-01-26 11:11:34.193323242 +0000 UTC m=+993.227363958" observedRunningTime="2026-01-26 11:12:09.301584499 +0000 UTC m=+1028.335625215" watchObservedRunningTime="2026-01-26 11:12:09.304071158 +0000 UTC m=+1028.338111874" Jan 26 11:12:09 crc kubenswrapper[4619]: I0126 11:12:09.330380 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.372971352 podStartE2EDuration="1m4.33036163s" podCreationTimestamp="2026-01-26 11:11:05 +0000 UTC" firstStartedPulling="2026-01-26 11:11:07.304037739 +0000 UTC m=+966.338078455" lastFinishedPulling="2026-01-26 11:11:34.261428017 +0000 UTC m=+993.295468733" observedRunningTime="2026-01-26 11:12:09.322844561 +0000 UTC m=+1028.356885277" watchObservedRunningTime="2026-01-26 11:12:09.33036163 +0000 UTC m=+1028.364402346" Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.802603 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.857222 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts\") pod \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.857377 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g76v\" (UniqueName: \"kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v\") pod \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\" (UID: \"ee547519-04e8-4ff3-be01-6d7783e3a4d3\") " Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.863194 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v" (OuterVolumeSpecName: "kube-api-access-9g76v") pod "ee547519-04e8-4ff3-be01-6d7783e3a4d3" (UID: "ee547519-04e8-4ff3-be01-6d7783e3a4d3"). InnerVolumeSpecName "kube-api-access-9g76v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.863680 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee547519-04e8-4ff3-be01-6d7783e3a4d3" (UID: "ee547519-04e8-4ff3-be01-6d7783e3a4d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.959034 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g76v\" (UniqueName: \"kubernetes.io/projected/ee547519-04e8-4ff3-be01-6d7783e3a4d3-kube-api-access-9g76v\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:10 crc kubenswrapper[4619]: I0126 11:12:10.959060 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee547519-04e8-4ff3-be01-6d7783e3a4d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:11 crc kubenswrapper[4619]: I0126 11:12:11.316251 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swzd7" event={"ID":"ee547519-04e8-4ff3-be01-6d7783e3a4d3","Type":"ContainerDied","Data":"51766fe4070dcd6ece52efea4ba56986b367cec77ca3127770bacdd629f19a3f"} Jan 26 11:12:11 crc kubenswrapper[4619]: I0126 11:12:11.316301 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51766fe4070dcd6ece52efea4ba56986b367cec77ca3127770bacdd629f19a3f" Jan 26 11:12:11 crc kubenswrapper[4619]: I0126 11:12:11.319043 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-swzd7" Jan 26 11:12:12 crc kubenswrapper[4619]: I0126 11:12:12.675658 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 11:12:14 crc kubenswrapper[4619]: I0126 11:12:14.514405 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:12:14 crc kubenswrapper[4619]: I0126 11:12:14.537399 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e50c002e-11c3-4dc8-b32b-c962da06aecb-etc-swift\") pod \"swift-storage-0\" (UID: \"e50c002e-11c3-4dc8-b32b-c962da06aecb\") " pod="openstack/swift-storage-0" Jan 26 11:12:14 crc kubenswrapper[4619]: I0126 11:12:14.615840 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.124672 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-djjzm" podUID="b814fe04-5ad5-4a1f-b49b-9f38ea5be2da" containerName="ovn-controller" probeResult="failure" output=< Jan 26 11:12:15 crc kubenswrapper[4619]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 11:12:15 crc kubenswrapper[4619]: > Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.200030 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.206089 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sq2gq" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.476858 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-djjzm-config-7xc87"] Jan 26 11:12:15 crc kubenswrapper[4619]: E0126 11:12:15.478101 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee547519-04e8-4ff3-be01-6d7783e3a4d3" containerName="mariadb-account-create-update" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.478123 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee547519-04e8-4ff3-be01-6d7783e3a4d3" containerName="mariadb-account-create-update" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.478344 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee547519-04e8-4ff3-be01-6d7783e3a4d3" containerName="mariadb-account-create-update" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.480552 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.484588 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm-config-7xc87"] Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.486339 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.542912 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.542982 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvh8\" (UniqueName: \"kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.543180 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.543220 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.543317 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.543450 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645627 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645692 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645730 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvh8\" (UniqueName: \"kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645776 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645794 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645832 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: 
I0126 11:12:15.645943 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.645979 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.646002 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.646711 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.647998 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.692243 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvh8\" (UniqueName: \"kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8\") pod \"ovn-controller-djjzm-config-7xc87\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:15 crc kubenswrapper[4619]: I0126 11:12:15.805608 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:20 crc kubenswrapper[4619]: I0126 11:12:20.119362 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-djjzm" podUID="b814fe04-5ad5-4a1f-b49b-9f38ea5be2da" containerName="ovn-controller" probeResult="failure" output=< Jan 26 11:12:20 crc kubenswrapper[4619]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 11:12:20 crc kubenswrapper[4619]: > Jan 26 11:12:22 crc kubenswrapper[4619]: I0126 11:12:22.250069 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm-config-7xc87"] Jan 26 11:12:22 crc kubenswrapper[4619]: I0126 11:12:22.441155 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:12:22 crc kubenswrapper[4619]: I0126 11:12:22.449034 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8bqkm" event={"ID":"ad779660-a430-4b5c-91dd-41b9582a4215","Type":"ContainerStarted","Data":"0852227a61d229754e598aba2008de299e2be76bfdb8865031a60afed1b61057"} Jan 26 11:12:22 crc kubenswrapper[4619]: I0126 11:12:22.461007 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-7xc87" event={"ID":"347845d3-c093-455e-a1d1-8ab9760f1c53","Type":"ContainerStarted","Data":"e118eb4a04a5617790f21db387934ab414bdab24268714e943e5ceb13837b4f3"} Jan 26 11:12:22 crc kubenswrapper[4619]: W0126 11:12:22.465726 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode50c002e_11c3_4dc8_b32b_c962da06aecb.slice/crio-76af1494e14c314e2ad1cc712687628f8481108a548915cb52c6d4a7ce900ef7 WatchSource:0}: Error finding container 76af1494e14c314e2ad1cc712687628f8481108a548915cb52c6d4a7ce900ef7: Status 404 returned error can't find the container with id 76af1494e14c314e2ad1cc712687628f8481108a548915cb52c6d4a7ce900ef7 Jan 26 11:12:22 crc kubenswrapper[4619]: I0126 11:12:22.473740 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8bqkm" podStartSLOduration=2.420570991 podStartE2EDuration="18.473721409s" podCreationTimestamp="2026-01-26 11:12:04 +0000 UTC" firstStartedPulling="2026-01-26 11:12:05.761434826 +0000 UTC m=+1024.795475542" lastFinishedPulling="2026-01-26 11:12:21.814585234 +0000 UTC m=+1040.848625960" observedRunningTime="2026-01-26 11:12:22.471547299 +0000 UTC m=+1041.505588025" watchObservedRunningTime="2026-01-26 11:12:22.473721409 +0000 UTC m=+1041.507762125" Jan 26 11:12:23 crc kubenswrapper[4619]: I0126 11:12:23.469523 4619 generic.go:334] "Generic (PLEG): container finished" podID="347845d3-c093-455e-a1d1-8ab9760f1c53" containerID="e2746dc4ae64e053c885b747e74e417691d616f5361dbe4b45706f04c337acef" exitCode=0 Jan 26 11:12:23 crc kubenswrapper[4619]: I0126 11:12:23.470253 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-7xc87" event={"ID":"347845d3-c093-455e-a1d1-8ab9760f1c53","Type":"ContainerDied","Data":"e2746dc4ae64e053c885b747e74e417691d616f5361dbe4b45706f04c337acef"} Jan 26 11:12:23 crc kubenswrapper[4619]: I0126 11:12:23.474441 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"76af1494e14c314e2ad1cc712687628f8481108a548915cb52c6d4a7ce900ef7"} Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 
11:12:24.487462 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"73f3f0d7decd4066ab9c36d8e236500f2344f34f46baca31abd59703c4450148"} Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.489329 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"6a3b79eaeda504b756e92a10f332846611133e98df98208147ec258a7665c628"} Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.489403 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"fe1f58f1ba1ecd6b123e4a7bb29c01b22456e202540ca449e4b7284748485470"} Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.489468 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"7b882ab210b286027e0d3fc51d3a687526712f605f50feae4f58cfc7476aed97"} Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.787056 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925252 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925570 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925686 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run" (OuterVolumeSpecName: "var-run") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925760 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925873 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvh8\" (UniqueName: \"kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925909 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.925956 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn\") pod \"347845d3-c093-455e-a1d1-8ab9760f1c53\" (UID: \"347845d3-c093-455e-a1d1-8ab9760f1c53\") " Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926076 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926174 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926476 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926576 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts" (OuterVolumeSpecName: "scripts") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926869 4619 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926904 4619 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926922 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/347845d3-c093-455e-a1d1-8ab9760f1c53-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926939 4619 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.926956 4619 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/347845d3-c093-455e-a1d1-8ab9760f1c53-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:24 crc kubenswrapper[4619]: I0126 11:12:24.936680 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8" (OuterVolumeSpecName: "kube-api-access-xgvh8") pod "347845d3-c093-455e-a1d1-8ab9760f1c53" (UID: "347845d3-c093-455e-a1d1-8ab9760f1c53"). InnerVolumeSpecName "kube-api-access-xgvh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.028409 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvh8\" (UniqueName: \"kubernetes.io/projected/347845d3-c093-455e-a1d1-8ab9760f1c53-kube-api-access-xgvh8\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.126692 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-djjzm" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.498596 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-7xc87" event={"ID":"347845d3-c093-455e-a1d1-8ab9760f1c53","Type":"ContainerDied","Data":"e118eb4a04a5617790f21db387934ab414bdab24268714e943e5ceb13837b4f3"} Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.498660 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e118eb4a04a5617790f21db387934ab414bdab24268714e943e5ceb13837b4f3" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.498725 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-djjzm-config-7xc87" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.892566 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-djjzm-config-7xc87"] Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.900372 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-djjzm-config-7xc87"] Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.991362 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-djjzm-config-pmjg8"] Jan 26 11:12:25 crc kubenswrapper[4619]: E0126 11:12:25.995227 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347845d3-c093-455e-a1d1-8ab9760f1c53" containerName="ovn-config" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.995261 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="347845d3-c093-455e-a1d1-8ab9760f1c53" containerName="ovn-config" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.995471 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="347845d3-c093-455e-a1d1-8ab9760f1c53" containerName="ovn-config" Jan 26 11:12:25 crc kubenswrapper[4619]: I0126 11:12:25.996040 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.003116 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.010121 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm-config-pmjg8"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.145797 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4sx9\" (UniqueName: \"kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.145852 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.145884 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.145904 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.145965 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.146329 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248077 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248310 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4sx9\" (UniqueName: \"kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248343 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248366 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248380 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248389 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248440 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.248986 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.249275 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.249319 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.250131 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.269027 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4sx9\" (UniqueName: \"kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9\") pod \"ovn-controller-djjzm-config-pmjg8\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.311795 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.376419 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.545147 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"fb7511b9c18adeed9cfc897d7d03e09b604fce2837188fe73cffbd3aea21a6f8"} Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.545207 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"2c245fa02fab31c360ad53a5f5b0a7a9e975b7afb80a91b0de7734f8aaab0e31"} Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.545216 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"cd7170ca563d35459298b0269df6574b1ac814c5bb2d596e19d6899fe53f8fb7"} Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.545225 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"11910e5ab686413c55bff2eac48c6bc03a5beef0c298b4a637b2be9bb799485e"} Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.675983 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.872177 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-l66xq"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.873189 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.917712 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d078-account-create-update-vhccn"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.918918 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.921737 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.937435 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d078-account-create-update-vhccn"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.962681 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l66xq"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.967681 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xqp\" (UniqueName: \"kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.967785 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts\") pod \"barbican-db-create-l66xq\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.967835 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmj4\" (UniqueName: \"kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4\") pod \"barbican-db-create-l66xq\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.967871 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.984551 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9qh4x"] Jan 26 11:12:26 crc kubenswrapper[4619]: I0126 11:12:26.985656 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.014380 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9qh4x"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.054944 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djjzm-config-pmjg8"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069604 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xqp\" (UniqueName: \"kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069697 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts\") pod \"barbican-db-create-l66xq\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069740 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7vb\" (UniqueName: \"kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069765 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmj4\" (UniqueName: \"kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4\") pod \"barbican-db-create-l66xq\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069790 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.069817 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.070527 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.071549 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts\") pod \"barbican-db-create-l66xq\" (UID: 
\"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.098927 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmj4\" (UniqueName: \"kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4\") pod \"barbican-db-create-l66xq\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.103109 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xqp\" (UniqueName: \"kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp\") pod \"barbican-d078-account-create-update-vhccn\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.167944 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-bf9c-account-create-update-r7s76"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.174759 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb7vb\" (UniqueName: \"kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.174808 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.175473 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.177544 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.179130 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bf9c-account-create-update-r7s76"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.180913 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.217590 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.238504 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7vb\" (UniqueName: \"kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb\") pod \"cinder-db-create-9qh4x\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.248306 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2n6m9"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.249298 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.281918 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.286892 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.286939 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbqp\" (UniqueName: \"kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.286993 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.287028 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzmr\" (UniqueName: \"kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.303227 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347845d3-c093-455e-a1d1-8ab9760f1c53" path="/var/lib/kubelet/pods/347845d3-c093-455e-a1d1-8ab9760f1c53/volumes" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.307746 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-74bc6"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.308634 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.315746 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.315998 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hx4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.316108 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.317132 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.321711 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.342658 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2n6m9"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.391408 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.391764 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbqp\" (UniqueName: \"kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.391882 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.391982 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqzmr\" (UniqueName: \"kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.392079 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.392183 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wtb\" (UniqueName: \"kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.392336 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.393209 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.394819 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.395660 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-74bc6"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.422156 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqzmr\" (UniqueName: \"kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr\") pod \"cinder-bf9c-account-create-update-r7s76\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.430032 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbqp\" (UniqueName: \"kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp\") pod \"neutron-db-create-2n6m9\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.457210 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0751-account-create-update-6ddwf"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.458606 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.464110 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.493330 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.493378 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68wtb\" (UniqueName: \"kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.493437 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.493457 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbhj\" (UniqueName: \"kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.493480 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.499486 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0751-account-create-update-6ddwf"] Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.500036 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.507841 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.509340 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.530749 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68wtb\" (UniqueName: \"kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb\") pod \"keystone-db-sync-74bc6\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.556828 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-pmjg8" event={"ID":"02dbcbe8-5527-40eb-b9a8-01c188281b3b","Type":"ContainerStarted","Data":"3e988cdce6b0360369590043c01e0dfef6f1125d37e77ae82d6f744978cbb0aa"} Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.598087 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.598129 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbhj\" (UniqueName: \"kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.602594 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.629537 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.642641 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbhj\" (UniqueName: \"kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj\") pod \"neutron-0751-account-create-update-6ddwf\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.643053 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.874496 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:27 crc kubenswrapper[4619]: I0126 11:12:27.901498 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l66xq"] Jan 26 11:12:27 crc kubenswrapper[4619]: W0126 11:12:27.946295 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1efc4fce_ab7b_4156_aabe_47611db76dc4.slice/crio-50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d WatchSource:0}: Error finding container 50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d: Status 404 returned error can't find the container with id 50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.017537 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d078-account-create-update-vhccn"] Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.501971 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bf9c-account-create-update-r7s76"] Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.510754 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2n6m9"] Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.537015 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-74bc6"] Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.566453 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-pmjg8" event={"ID":"02dbcbe8-5527-40eb-b9a8-01c188281b3b","Type":"ContainerStarted","Data":"53e0439f074a8027768c23f947352e1e64902410b671bdc635853612ce33cead"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.569544 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d078-account-create-update-vhccn" event={"ID":"0a7a9429-31bf-450f-8bbc-5732f2073487","Type":"ContainerStarted","Data":"f6bea69ee45a26ee23ec1e0d4dc318e40aa1c6feca121f1040eb43bb5b01a077"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.569566 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d078-account-create-update-vhccn" event={"ID":"0a7a9429-31bf-450f-8bbc-5732f2073487","Type":"ContainerStarted","Data":"75a3cda72023d51484ddceb2ce7dc668a403f762a4ef9db3fb1a24cafe1b7b12"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.571314 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bf9c-account-create-update-r7s76" event={"ID":"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99","Type":"ContainerStarted","Data":"14cb15ff499fc642040c39e7dd3f861f18d3cc098b7f39bf7f41e8c685c1140a"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.572582 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l66xq" event={"ID":"1efc4fce-ab7b-4156-aabe-47611db76dc4","Type":"ContainerStarted","Data":"8960d0b808e54b3d3dcdbaab7921049adf220a66b3cc982d9e72064205a11648"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.572600 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l66xq" event={"ID":"1efc4fce-ab7b-4156-aabe-47611db76dc4","Type":"ContainerStarted","Data":"50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d"} 
Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.574299 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n6m9" event={"ID":"d96e2aac-889c-48f2-834b-2795a77c10d2","Type":"ContainerStarted","Data":"86b5c18e2628b6537cb0339834b5b4eebe61080bb1c773dff5d812c02cca4884"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.575215 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-74bc6" event={"ID":"d8fa96fb-5f34-4f6c-932b-14420024f02d","Type":"ContainerStarted","Data":"d550b54268d14f1d8a229421029a9b9f32dfa39e1c86d4011d50aef979cd5965"} Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.592708 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-djjzm-config-pmjg8" podStartSLOduration=3.592691969 podStartE2EDuration="3.592691969s" podCreationTimestamp="2026-01-26 11:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:28.589900301 +0000 UTC m=+1047.623941017" watchObservedRunningTime="2026-01-26 11:12:28.592691969 +0000 UTC m=+1047.626732675" Jan 26 11:12:28 crc kubenswrapper[4619]: W0126 11:12:28.623139 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911c4b43_4424_4fcf_91ca_d1c4263dd12d.slice/crio-5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b WatchSource:0}: Error finding container 5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b: Status 404 returned error can't find the container with id 5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.624821 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0751-account-create-update-6ddwf"] Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.627007 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-l66xq" podStartSLOduration=2.626985164 podStartE2EDuration="2.626985164s" podCreationTimestamp="2026-01-26 11:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:28.620077542 +0000 UTC m=+1047.654118278" watchObservedRunningTime="2026-01-26 11:12:28.626985164 +0000 UTC m=+1047.661025870" Jan 26 11:12:28 crc kubenswrapper[4619]: I0126 11:12:28.716737 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9qh4x"] Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.671072 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bf9c-account-create-update-r7s76" event={"ID":"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99","Type":"ContainerStarted","Data":"a858cedeadd66a6f0470c5e2e7365be5d50df4d22b4a182bc51714ed8e7b701c"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.675702 4619 generic.go:334] "Generic (PLEG): container finished" podID="1efc4fce-ab7b-4156-aabe-47611db76dc4" containerID="8960d0b808e54b3d3dcdbaab7921049adf220a66b3cc982d9e72064205a11648" exitCode=0 Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.675759 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l66xq" event={"ID":"1efc4fce-ab7b-4156-aabe-47611db76dc4","Type":"ContainerDied","Data":"8960d0b808e54b3d3dcdbaab7921049adf220a66b3cc982d9e72064205a11648"} Jan 26 11:12:29 crc 
kubenswrapper[4619]: I0126 11:12:29.697639 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-bf9c-account-create-update-r7s76" podStartSLOduration=2.697625033 podStartE2EDuration="2.697625033s" podCreationTimestamp="2026-01-26 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:29.693966721 +0000 UTC m=+1048.728007437" watchObservedRunningTime="2026-01-26 11:12:29.697625033 +0000 UTC m=+1048.731665749" Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.704357 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n6m9" event={"ID":"d96e2aac-889c-48f2-834b-2795a77c10d2","Type":"ContainerStarted","Data":"ac72704304b81f23b2ccc5ee50a81555259eff724527acbeb3f6eb42005bbf77"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.732333 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"903ea2dba081ce537400645542cc55ddadb4d3e8d9ae291aa8aa804dcfb2bf89"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.741842 4619 generic.go:334] "Generic (PLEG): container finished" podID="02dbcbe8-5527-40eb-b9a8-01c188281b3b" containerID="53e0439f074a8027768c23f947352e1e64902410b671bdc635853612ce33cead" exitCode=0 Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.741944 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djjzm-config-pmjg8" event={"ID":"02dbcbe8-5527-40eb-b9a8-01c188281b3b","Type":"ContainerDied","Data":"53e0439f074a8027768c23f947352e1e64902410b671bdc635853612ce33cead"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.749726 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9qh4x" event={"ID":"a025664f-5072-4d63-af0f-d1d92e019292","Type":"ContainerStarted","Data":"a7a7dfb414d6a0b14e825826fae45461144b1942ecdc57d537c9ce525714312d"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.752649 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-2n6m9" podStartSLOduration=2.752603403 podStartE2EDuration="2.752603403s" podCreationTimestamp="2026-01-26 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:29.740813975 +0000 UTC m=+1048.774854691" watchObservedRunningTime="2026-01-26 11:12:29.752603403 +0000 UTC m=+1048.786644119" Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.764592 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0751-account-create-update-6ddwf" event={"ID":"911c4b43-4424-4fcf-91ca-d1c4263dd12d","Type":"ContainerStarted","Data":"3cbfe2ad589d4c43ff7fb3422764371cce00dc7651b4039e7dc209aea8f34404"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.764643 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0751-account-create-update-6ddwf" event={"ID":"911c4b43-4424-4fcf-91ca-d1c4263dd12d","Type":"ContainerStarted","Data":"5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b"} Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.797936 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d078-account-create-update-vhccn" podStartSLOduration=3.797912954 podStartE2EDuration="3.797912954s" 
podCreationTimestamp="2026-01-26 11:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:29.79019316 +0000 UTC m=+1048.824233876" watchObservedRunningTime="2026-01-26 11:12:29.797912954 +0000 UTC m=+1048.831953670" Jan 26 11:12:29 crc kubenswrapper[4619]: I0126 11:12:29.819276 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0751-account-create-update-6ddwf" podStartSLOduration=2.819258148 podStartE2EDuration="2.819258148s" podCreationTimestamp="2026-01-26 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:29.815905045 +0000 UTC m=+1048.849945761" watchObservedRunningTime="2026-01-26 11:12:29.819258148 +0000 UTC m=+1048.853298864" Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.779052 4619 generic.go:334] "Generic (PLEG): container finished" podID="a025664f-5072-4d63-af0f-d1d92e019292" containerID="b3be53181c90b989cb6fefc6626a61dfce098ce102082e9faa2e5d9dc4b6b001" exitCode=0 Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.779225 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9qh4x" event={"ID":"a025664f-5072-4d63-af0f-d1d92e019292","Type":"ContainerDied","Data":"b3be53181c90b989cb6fefc6626a61dfce098ce102082e9faa2e5d9dc4b6b001"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.783557 4619 generic.go:334] "Generic (PLEG): container finished" podID="911c4b43-4424-4fcf-91ca-d1c4263dd12d" containerID="3cbfe2ad589d4c43ff7fb3422764371cce00dc7651b4039e7dc209aea8f34404" exitCode=0 Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.783645 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0751-account-create-update-6ddwf" event={"ID":"911c4b43-4424-4fcf-91ca-d1c4263dd12d","Type":"ContainerDied","Data":"3cbfe2ad589d4c43ff7fb3422764371cce00dc7651b4039e7dc209aea8f34404"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.786187 4619 generic.go:334] "Generic (PLEG): container finished" podID="74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" containerID="a858cedeadd66a6f0470c5e2e7365be5d50df4d22b4a182bc51714ed8e7b701c" exitCode=0 Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.786266 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bf9c-account-create-update-r7s76" event={"ID":"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99","Type":"ContainerDied","Data":"a858cedeadd66a6f0470c5e2e7365be5d50df4d22b4a182bc51714ed8e7b701c"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.794379 4619 generic.go:334] "Generic (PLEG): container finished" podID="d96e2aac-889c-48f2-834b-2795a77c10d2" containerID="ac72704304b81f23b2ccc5ee50a81555259eff724527acbeb3f6eb42005bbf77" exitCode=0 Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.794473 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n6m9" event={"ID":"d96e2aac-889c-48f2-834b-2795a77c10d2","Type":"ContainerDied","Data":"ac72704304b81f23b2ccc5ee50a81555259eff724527acbeb3f6eb42005bbf77"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.811762 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"83c8a5e96b82a225b6905cca86b563ce8b44348da073f3eac5d10f1b26ff8ba6"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.811805 
4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"ab7f30bf8c4b3ab61e874fbf0cf280ccbbbcff69e9a35e761c9968d566bff9d3"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.811815 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"d9dcce4f4163846ba70d9f8ba83e745e6f29d8fa751c6e5c529f18ab67c4cba4"} Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.816065 4619 generic.go:334] "Generic (PLEG): container finished" podID="0a7a9429-31bf-450f-8bbc-5732f2073487" containerID="f6bea69ee45a26ee23ec1e0d4dc318e40aa1c6feca121f1040eb43bb5b01a077" exitCode=0 Jan 26 11:12:30 crc kubenswrapper[4619]: I0126 11:12:30.816263 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d078-account-create-update-vhccn" event={"ID":"0a7a9429-31bf-450f-8bbc-5732f2073487","Type":"ContainerDied","Data":"f6bea69ee45a26ee23ec1e0d4dc318e40aa1c6feca121f1040eb43bb5b01a077"} Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.183111 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214449 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214540 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214602 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214688 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4sx9\" (UniqueName: \"kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214743 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn\") pod \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\" (UID: \"02dbcbe8-5527-40eb-b9a8-01c188281b3b\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214842 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.214895 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run" (OuterVolumeSpecName: "var-run") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.215137 4619 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.215156 4619 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.215212 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.215896 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.216157 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts" (OuterVolumeSpecName: "scripts") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.235009 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9" (OuterVolumeSpecName: "kube-api-access-h4sx9") pod "02dbcbe8-5527-40eb-b9a8-01c188281b3b" (UID: "02dbcbe8-5527-40eb-b9a8-01c188281b3b"). InnerVolumeSpecName "kube-api-access-h4sx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.316466 4619 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.316496 4619 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dbcbe8-5527-40eb-b9a8-01c188281b3b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.316506 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dbcbe8-5527-40eb-b9a8-01c188281b3b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.316541 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4sx9\" (UniqueName: \"kubernetes.io/projected/02dbcbe8-5527-40eb-b9a8-01c188281b3b-kube-api-access-h4sx9\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.387696 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.417766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmj4\" (UniqueName: \"kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4\") pod \"1efc4fce-ab7b-4156-aabe-47611db76dc4\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.417998 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts\") pod \"1efc4fce-ab7b-4156-aabe-47611db76dc4\" (UID: \"1efc4fce-ab7b-4156-aabe-47611db76dc4\") " Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.418742 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1efc4fce-ab7b-4156-aabe-47611db76dc4" (UID: "1efc4fce-ab7b-4156-aabe-47611db76dc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.419768 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efc4fce-ab7b-4156-aabe-47611db76dc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.422364 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4" (OuterVolumeSpecName: "kube-api-access-fnmj4") pod "1efc4fce-ab7b-4156-aabe-47611db76dc4" (UID: "1efc4fce-ab7b-4156-aabe-47611db76dc4"). InnerVolumeSpecName "kube-api-access-fnmj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.520932 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmj4\" (UniqueName: \"kubernetes.io/projected/1efc4fce-ab7b-4156-aabe-47611db76dc4-kube-api-access-fnmj4\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.671530 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-djjzm-config-pmjg8"] Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.678442 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-djjzm-config-pmjg8"] Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.836812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"d3539a147c9c68428579276ff088da0af5c98bf5e9f063576e0c80e18eecd990"} Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.836854 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"df094df20b752dec4eabcaf34c1268268c0764f0383d681c2a01882191355f4c"} Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.844566 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e988cdce6b0360369590043c01e0dfef6f1125d37e77ae82d6f744978cbb0aa" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.844706 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djjzm-config-pmjg8" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.851461 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l66xq" Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.851704 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l66xq" event={"ID":"1efc4fce-ab7b-4156-aabe-47611db76dc4","Type":"ContainerDied","Data":"50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d"} Jan 26 11:12:31 crc kubenswrapper[4619]: I0126 11:12:31.851770 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50926a9ad432a53c2aa0ee59a3b0c9da51e1c0454cbafccc48f12adc45f9508d" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.261711 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.331968 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnbqp\" (UniqueName: \"kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp\") pod \"d96e2aac-889c-48f2-834b-2795a77c10d2\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.332165 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts\") pod \"d96e2aac-889c-48f2-834b-2795a77c10d2\" (UID: \"d96e2aac-889c-48f2-834b-2795a77c10d2\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.336490 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d96e2aac-889c-48f2-834b-2795a77c10d2" (UID: "d96e2aac-889c-48f2-834b-2795a77c10d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.351777 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp" (OuterVolumeSpecName: "kube-api-access-bnbqp") pod "d96e2aac-889c-48f2-834b-2795a77c10d2" (UID: "d96e2aac-889c-48f2-834b-2795a77c10d2"). InnerVolumeSpecName "kube-api-access-bnbqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.394019 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.433043 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts\") pod \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.433196 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqbhj\" (UniqueName: \"kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj\") pod \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\" (UID: \"911c4b43-4424-4fcf-91ca-d1c4263dd12d\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.433418 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnbqp\" (UniqueName: \"kubernetes.io/projected/d96e2aac-889c-48f2-834b-2795a77c10d2-kube-api-access-bnbqp\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.433431 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96e2aac-889c-48f2-834b-2795a77c10d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.436910 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "911c4b43-4424-4fcf-91ca-d1c4263dd12d" (UID: "911c4b43-4424-4fcf-91ca-d1c4263dd12d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.439424 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj" (OuterVolumeSpecName: "kube-api-access-vqbhj") pod "911c4b43-4424-4fcf-91ca-d1c4263dd12d" (UID: "911c4b43-4424-4fcf-91ca-d1c4263dd12d"). InnerVolumeSpecName "kube-api-access-vqbhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.444295 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.452330 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.464070 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.533965 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7vb\" (UniqueName: \"kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb\") pod \"a025664f-5072-4d63-af0f-d1d92e019292\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.534018 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts\") pod \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.534072 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xqp\" (UniqueName: \"kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp\") pod \"0a7a9429-31bf-450f-8bbc-5732f2073487\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.534761 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" (UID: "74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.534093 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts\") pod \"a025664f-5072-4d63-af0f-d1d92e019292\" (UID: \"a025664f-5072-4d63-af0f-d1d92e019292\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.535003 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts\") pod \"0a7a9429-31bf-450f-8bbc-5732f2073487\" (UID: \"0a7a9429-31bf-450f-8bbc-5732f2073487\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.535075 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzmr\" (UniqueName: \"kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr\") pod \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\" (UID: \"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99\") " Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.535173 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a025664f-5072-4d63-af0f-d1d92e019292" (UID: "a025664f-5072-4d63-af0f-d1d92e019292"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.535768 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a7a9429-31bf-450f-8bbc-5732f2073487" (UID: "0a7a9429-31bf-450f-8bbc-5732f2073487"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.536025 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a025664f-5072-4d63-af0f-d1d92e019292-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.536051 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqbhj\" (UniqueName: \"kubernetes.io/projected/911c4b43-4424-4fcf-91ca-d1c4263dd12d-kube-api-access-vqbhj\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.536066 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.536096 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911c4b43-4424-4fcf-91ca-d1c4263dd12d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.537426 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb" (OuterVolumeSpecName: "kube-api-access-nb7vb") pod "a025664f-5072-4d63-af0f-d1d92e019292" (UID: "a025664f-5072-4d63-af0f-d1d92e019292"). InnerVolumeSpecName "kube-api-access-nb7vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.538026 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp" (OuterVolumeSpecName: "kube-api-access-92xqp") pod "0a7a9429-31bf-450f-8bbc-5732f2073487" (UID: "0a7a9429-31bf-450f-8bbc-5732f2073487"). InnerVolumeSpecName "kube-api-access-92xqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.539340 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr" (OuterVolumeSpecName: "kube-api-access-nqzmr") pod "74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" (UID: "74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99"). InnerVolumeSpecName "kube-api-access-nqzmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.638024 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqzmr\" (UniqueName: \"kubernetes.io/projected/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99-kube-api-access-nqzmr\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.638058 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb7vb\" (UniqueName: \"kubernetes.io/projected/a025664f-5072-4d63-af0f-d1d92e019292-kube-api-access-nb7vb\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.638098 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92xqp\" (UniqueName: \"kubernetes.io/projected/0a7a9429-31bf-450f-8bbc-5732f2073487-kube-api-access-92xqp\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.638111 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7a9429-31bf-450f-8bbc-5732f2073487-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.864892 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0751-account-create-update-6ddwf" event={"ID":"911c4b43-4424-4fcf-91ca-d1c4263dd12d","Type":"ContainerDied","Data":"5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.864931 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f53c92da4e8228d0bda8cb351831af93c6578a5f45aacc80d16393f2cbb785b" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.864988 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0751-account-create-update-6ddwf" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.867812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bf9c-account-create-update-r7s76" event={"ID":"74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99","Type":"ContainerDied","Data":"14cb15ff499fc642040c39e7dd3f861f18d3cc098b7f39bf7f41e8c685c1140a"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.867839 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14cb15ff499fc642040c39e7dd3f861f18d3cc098b7f39bf7f41e8c685c1140a" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.867881 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-bf9c-account-create-update-r7s76" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.879444 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n6m9" event={"ID":"d96e2aac-889c-48f2-834b-2795a77c10d2","Type":"ContainerDied","Data":"86b5c18e2628b6537cb0339834b5b4eebe61080bb1c773dff5d812c02cca4884"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.879479 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b5c18e2628b6537cb0339834b5b4eebe61080bb1c773dff5d812c02cca4884" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.879536 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n6m9" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.891284 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e50c002e-11c3-4dc8-b32b-c962da06aecb","Type":"ContainerStarted","Data":"152acbe806bd03a63ce7dd4e8f0c7caecb283aa61e71a27f633920c486fa9a4b"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.894312 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d078-account-create-update-vhccn" event={"ID":"0a7a9429-31bf-450f-8bbc-5732f2073487","Type":"ContainerDied","Data":"75a3cda72023d51484ddceb2ce7dc668a403f762a4ef9db3fb1a24cafe1b7b12"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.894361 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a3cda72023d51484ddceb2ce7dc668a403f762a4ef9db3fb1a24cafe1b7b12" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.894370 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d078-account-create-update-vhccn" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.895979 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9qh4x" event={"ID":"a025664f-5072-4d63-af0f-d1d92e019292","Type":"ContainerDied","Data":"a7a7dfb414d6a0b14e825826fae45461144b1942ecdc57d537c9ce525714312d"} Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.896012 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a7dfb414d6a0b14e825826fae45461144b1942ecdc57d537c9ce525714312d" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.896219 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9qh4x" Jan 26 11:12:32 crc kubenswrapper[4619]: I0126 11:12:32.949876 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.137300848 podStartE2EDuration="51.949853662s" podCreationTimestamp="2026-01-26 11:11:41 +0000 UTC" firstStartedPulling="2026-01-26 11:12:22.467881737 +0000 UTC m=+1041.501922453" lastFinishedPulling="2026-01-26 11:12:29.280434551 +0000 UTC m=+1048.314475267" observedRunningTime="2026-01-26 11:12:32.938489336 +0000 UTC m=+1051.972530052" watchObservedRunningTime="2026-01-26 11:12:32.949853662 +0000 UTC m=+1051.983894378" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.219287 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.219926 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a025664f-5072-4d63-af0f-d1d92e019292" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.219939 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a025664f-5072-4d63-af0f-d1d92e019292" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.219950 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96e2aac-889c-48f2-834b-2795a77c10d2" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.219955 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96e2aac-889c-48f2-834b-2795a77c10d2" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.219970 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7a9429-31bf-450f-8bbc-5732f2073487" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.219978 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7a9429-31bf-450f-8bbc-5732f2073487" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.219988 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efc4fce-ab7b-4156-aabe-47611db76dc4" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.219993 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efc4fce-ab7b-4156-aabe-47611db76dc4" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.220004 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dbcbe8-5527-40eb-b9a8-01c188281b3b" containerName="ovn-config" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220010 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dbcbe8-5527-40eb-b9a8-01c188281b3b" containerName="ovn-config" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.220024 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911c4b43-4424-4fcf-91ca-d1c4263dd12d" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220030 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="911c4b43-4424-4fcf-91ca-d1c4263dd12d" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: E0126 11:12:33.220036 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc 
kubenswrapper[4619]: I0126 11:12:33.220041 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220202 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220222 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7a9429-31bf-450f-8bbc-5732f2073487" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220232 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a025664f-5072-4d63-af0f-d1d92e019292" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220242 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1efc4fce-ab7b-4156-aabe-47611db76dc4" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220252 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96e2aac-889c-48f2-834b-2795a77c10d2" containerName="mariadb-database-create" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220262 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dbcbe8-5527-40eb-b9a8-01c188281b3b" containerName="ovn-config" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.220268 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="911c4b43-4424-4fcf-91ca-d1c4263dd12d" containerName="mariadb-account-create-update" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.221141 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.222704 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.236497 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.281904 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02dbcbe8-5527-40eb-b9a8-01c188281b3b" path="/var/lib/kubelet/pods/02dbcbe8-5527-40eb-b9a8-01c188281b3b/volumes" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.352552 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.352653 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dtpl\" (UniqueName: \"kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.352678 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.352698 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.352734 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.353181 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455443 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455514 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dtpl\" (UniqueName: \"kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455536 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455553 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455588 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.455686 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.457023 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.457326 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.457327 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.457888 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.458546 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.482398 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dtpl\" (UniqueName: \"kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl\") pod \"dnsmasq-dns-764c5664d7-b6wlt\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:33 crc kubenswrapper[4619]: I0126 11:12:33.542539 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:35 crc kubenswrapper[4619]: I0126 11:12:35.935368 4619 generic.go:334] "Generic (PLEG): container finished" podID="ad779660-a430-4b5c-91dd-41b9582a4215" containerID="0852227a61d229754e598aba2008de299e2be76bfdb8865031a60afed1b61057" exitCode=0 Jan 26 11:12:35 crc kubenswrapper[4619]: I0126 11:12:35.935847 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8bqkm" event={"ID":"ad779660-a430-4b5c-91dd-41b9582a4215","Type":"ContainerDied","Data":"0852227a61d229754e598aba2008de299e2be76bfdb8865031a60afed1b61057"} Jan 26 11:12:36 crc kubenswrapper[4619]: I0126 11:12:36.572329 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:36 crc kubenswrapper[4619]: W0126 11:12:36.581964 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e721746_254e_483a_b19b_cca24a0824d1.slice/crio-f1338690b9d4f62a87a85b327f8bd7425b27d08d9d8504c551cd491f40512874 WatchSource:0}: Error finding container f1338690b9d4f62a87a85b327f8bd7425b27d08d9d8504c551cd491f40512874: Status 404 returned error can't find the container with id f1338690b9d4f62a87a85b327f8bd7425b27d08d9d8504c551cd491f40512874 Jan 26 11:12:36 crc kubenswrapper[4619]: I0126 11:12:36.949434 4619 generic.go:334] "Generic (PLEG): container finished" podID="5e721746-254e-483a-b19b-cca24a0824d1" containerID="03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa" exitCode=0 Jan 26 11:12:36 crc kubenswrapper[4619]: I0126 11:12:36.949714 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" event={"ID":"5e721746-254e-483a-b19b-cca24a0824d1","Type":"ContainerDied","Data":"03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa"} Jan 26 11:12:36 crc kubenswrapper[4619]: I0126 11:12:36.949739 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" event={"ID":"5e721746-254e-483a-b19b-cca24a0824d1","Type":"ContainerStarted","Data":"f1338690b9d4f62a87a85b327f8bd7425b27d08d9d8504c551cd491f40512874"} Jan 26 11:12:36 crc kubenswrapper[4619]: I0126 11:12:36.959702 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-74bc6" event={"ID":"d8fa96fb-5f34-4f6c-932b-14420024f02d","Type":"ContainerStarted","Data":"20b1759c869725f0dcb69fe39834fbb9b37c82a13f1ace62b905491a934a6013"} Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.048147 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-74bc6" podStartSLOduration=2.328020935 podStartE2EDuration="10.048130569s" podCreationTimestamp="2026-01-26 11:12:27 +0000 
UTC" firstStartedPulling="2026-01-26 11:12:28.553726824 +0000 UTC m=+1047.587767540" lastFinishedPulling="2026-01-26 11:12:36.273836448 +0000 UTC m=+1055.307877174" observedRunningTime="2026-01-26 11:12:37.041216637 +0000 UTC m=+1056.075257353" watchObservedRunningTime="2026-01-26 11:12:37.048130569 +0000 UTC m=+1056.082171275" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.552754 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.738350 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data\") pod \"ad779660-a430-4b5c-91dd-41b9582a4215\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.738409 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle\") pod \"ad779660-a430-4b5c-91dd-41b9582a4215\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.738493 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st9ps\" (UniqueName: \"kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps\") pod \"ad779660-a430-4b5c-91dd-41b9582a4215\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.738583 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data\") pod \"ad779660-a430-4b5c-91dd-41b9582a4215\" (UID: \"ad779660-a430-4b5c-91dd-41b9582a4215\") " Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.746178 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps" (OuterVolumeSpecName: "kube-api-access-st9ps") pod "ad779660-a430-4b5c-91dd-41b9582a4215" (UID: "ad779660-a430-4b5c-91dd-41b9582a4215"). InnerVolumeSpecName "kube-api-access-st9ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.760023 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad779660-a430-4b5c-91dd-41b9582a4215" (UID: "ad779660-a430-4b5c-91dd-41b9582a4215"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.766992 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad779660-a430-4b5c-91dd-41b9582a4215" (UID: "ad779660-a430-4b5c-91dd-41b9582a4215"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.792554 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data" (OuterVolumeSpecName: "config-data") pod "ad779660-a430-4b5c-91dd-41b9582a4215" (UID: "ad779660-a430-4b5c-91dd-41b9582a4215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.841058 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.841089 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.841101 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st9ps\" (UniqueName: \"kubernetes.io/projected/ad779660-a430-4b5c-91dd-41b9582a4215-kube-api-access-st9ps\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.841124 4619 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad779660-a430-4b5c-91dd-41b9582a4215-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.973809 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" event={"ID":"5e721746-254e-483a-b19b-cca24a0824d1","Type":"ContainerStarted","Data":"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353"} Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.973944 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.976340 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8bqkm" event={"ID":"ad779660-a430-4b5c-91dd-41b9582a4215","Type":"ContainerDied","Data":"2ca7b8222f2f2a766ddf76a24f605b407f9508393b5f4926f5af74075781a690"} Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.976372 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8bqkm" Jan 26 11:12:37 crc kubenswrapper[4619]: I0126 11:12:37.976378 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca7b8222f2f2a766ddf76a24f605b407f9508393b5f4926f5af74075781a690" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.004300 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" podStartSLOduration=5.004279802 podStartE2EDuration="5.004279802s" podCreationTimestamp="2026-01-26 11:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:37.998373488 +0000 UTC m=+1057.032414204" watchObservedRunningTime="2026-01-26 11:12:38.004279802 +0000 UTC m=+1057.038320518" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.441687 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.507732 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"] Jan 26 11:12:38 crc kubenswrapper[4619]: E0126 11:12:38.508283 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad779660-a430-4b5c-91dd-41b9582a4215" containerName="glance-db-sync" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.508471 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad779660-a430-4b5c-91dd-41b9582a4215" containerName="glance-db-sync" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.508755 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad779660-a430-4b5c-91dd-41b9582a4215" containerName="glance-db-sync" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.509648 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.540520 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"] Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651218 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651265 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651308 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkqz\" (UniqueName: \"kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651384 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651425 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.651446 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.752843 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grkqz\" (UniqueName: \"kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.752940 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.752984 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.753009 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.753059 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.753082 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.753945 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.754019 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.754081 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.754356 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.754485 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.770881 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grkqz\" (UniqueName: 
\"kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz\") pod \"dnsmasq-dns-74f6bcbc87-mwg4l\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:38 crc kubenswrapper[4619]: I0126 11:12:38.825779 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:39 crc kubenswrapper[4619]: I0126 11:12:39.297759 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"] Jan 26 11:12:39 crc kubenswrapper[4619]: W0126 11:12:39.300598 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2524a2_4811_44c1_9a0d_9050ba69ea1c.slice/crio-e65595e70df6c018157dda417a3f9ad722774b5969db56b33fb587f5be8aa3bb WatchSource:0}: Error finding container e65595e70df6c018157dda417a3f9ad722774b5969db56b33fb587f5be8aa3bb: Status 404 returned error can't find the container with id e65595e70df6c018157dda417a3f9ad722774b5969db56b33fb587f5be8aa3bb Jan 26 11:12:39 crc kubenswrapper[4619]: I0126 11:12:39.990702 4619 generic.go:334] "Generic (PLEG): container finished" podID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerID="4130fd0a4cd1630bb28d83fd6bc01587dbeb442827694f8c05bd27660a69f600" exitCode=0 Jan 26 11:12:39 crc kubenswrapper[4619]: I0126 11:12:39.990822 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" event={"ID":"8a2524a2-4811-44c1-9a0d-9050ba69ea1c","Type":"ContainerDied","Data":"4130fd0a4cd1630bb28d83fd6bc01587dbeb442827694f8c05bd27660a69f600"} Jan 26 11:12:39 crc kubenswrapper[4619]: I0126 11:12:39.991128 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" event={"ID":"8a2524a2-4811-44c1-9a0d-9050ba69ea1c","Type":"ContainerStarted","Data":"e65595e70df6c018157dda417a3f9ad722774b5969db56b33fb587f5be8aa3bb"} Jan 26 11:12:39 crc kubenswrapper[4619]: I0126 11:12:39.991249 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="dnsmasq-dns" containerID="cri-o://eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353" gracePeriod=10 Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.479690 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582115 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dtpl\" (UniqueName: \"kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582180 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582233 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582323 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582362 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.582431 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc\") pod \"5e721746-254e-483a-b19b-cca24a0824d1\" (UID: \"5e721746-254e-483a-b19b-cca24a0824d1\") " Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.603377 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl" (OuterVolumeSpecName: "kube-api-access-7dtpl") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "kube-api-access-7dtpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.625516 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.631754 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.653985 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.654046 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.667899 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config" (OuterVolumeSpecName: "config") pod "5e721746-254e-483a-b19b-cca24a0824d1" (UID: "5e721746-254e-483a-b19b-cca24a0824d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685487 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685520 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685532 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685542 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dtpl\" (UniqueName: \"kubernetes.io/projected/5e721746-254e-483a-b19b-cca24a0824d1-kube-api-access-7dtpl\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685551 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.685560 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e721746-254e-483a-b19b-cca24a0824d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:40 crc kubenswrapper[4619]: I0126 11:12:40.999714 4619 generic.go:334] "Generic (PLEG): container finished" podID="5e721746-254e-483a-b19b-cca24a0824d1" containerID="eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353" exitCode=0 Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:40.999776 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" event={"ID":"5e721746-254e-483a-b19b-cca24a0824d1","Type":"ContainerDied","Data":"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353"} Jan 26 11:12:41 crc 
kubenswrapper[4619]: I0126 11:12:40.999787 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:40.999812 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b6wlt" event={"ID":"5e721746-254e-483a-b19b-cca24a0824d1","Type":"ContainerDied","Data":"f1338690b9d4f62a87a85b327f8bd7425b27d08d9d8504c551cd491f40512874"} Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:40.999843 4619 scope.go:117] "RemoveContainer" containerID="eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.003029 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" event={"ID":"8a2524a2-4811-44c1-9a0d-9050ba69ea1c","Type":"ContainerStarted","Data":"2a4b078870c4fac1504af6d5c02c3149efb0f01dfa325b9d995dc89ee3575b33"} Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.003564 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.031752 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" podStartSLOduration=3.031732875 podStartE2EDuration="3.031732875s" podCreationTimestamp="2026-01-26 11:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:41.024160664 +0000 UTC m=+1060.058201380" watchObservedRunningTime="2026-01-26 11:12:41.031732875 +0000 UTC m=+1060.065773591" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.032861 4619 scope.go:117] "RemoveContainer" containerID="03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.051374 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.058413 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b6wlt"] Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.064631 4619 scope.go:117] "RemoveContainer" containerID="eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353" Jan 26 11:12:41 crc kubenswrapper[4619]: E0126 11:12:41.064981 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353\": container with ID starting with eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353 not found: ID does not exist" containerID="eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.065009 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353"} err="failed to get container status \"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353\": rpc error: code = NotFound desc = could not find container \"eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353\": container with ID starting with eea4215bcdb023e917950d60b7a241dfc4060227dd0bbcba0564e9f92f163353 not found: ID does not exist" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.065031 4619 scope.go:117] 
"RemoveContainer" containerID="03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa" Jan 26 11:12:41 crc kubenswrapper[4619]: E0126 11:12:41.065565 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa\": container with ID starting with 03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa not found: ID does not exist" containerID="03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.065588 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa"} err="failed to get container status \"03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa\": rpc error: code = NotFound desc = could not find container \"03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa\": container with ID starting with 03ef4283f8b390381085c7490ad5b4147279bd37444bf6e0ea2172b47bfc3caa not found: ID does not exist" Jan 26 11:12:41 crc kubenswrapper[4619]: I0126 11:12:41.270694 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e721746-254e-483a-b19b-cca24a0824d1" path="/var/lib/kubelet/pods/5e721746-254e-483a-b19b-cca24a0824d1/volumes" Jan 26 11:12:42 crc kubenswrapper[4619]: I0126 11:12:42.013563 4619 generic.go:334] "Generic (PLEG): container finished" podID="d8fa96fb-5f34-4f6c-932b-14420024f02d" containerID="20b1759c869725f0dcb69fe39834fbb9b37c82a13f1ace62b905491a934a6013" exitCode=0 Jan 26 11:12:42 crc kubenswrapper[4619]: I0126 11:12:42.014600 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-74bc6" event={"ID":"d8fa96fb-5f34-4f6c-932b-14420024f02d","Type":"ContainerDied","Data":"20b1759c869725f0dcb69fe39834fbb9b37c82a13f1ace62b905491a934a6013"} Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.428760 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.529730 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data\") pod \"d8fa96fb-5f34-4f6c-932b-14420024f02d\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.529789 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68wtb\" (UniqueName: \"kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb\") pod \"d8fa96fb-5f34-4f6c-932b-14420024f02d\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.529987 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle\") pod \"d8fa96fb-5f34-4f6c-932b-14420024f02d\" (UID: \"d8fa96fb-5f34-4f6c-932b-14420024f02d\") " Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.535087 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb" (OuterVolumeSpecName: "kube-api-access-68wtb") pod "d8fa96fb-5f34-4f6c-932b-14420024f02d" (UID: "d8fa96fb-5f34-4f6c-932b-14420024f02d"). InnerVolumeSpecName "kube-api-access-68wtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.560113 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8fa96fb-5f34-4f6c-932b-14420024f02d" (UID: "d8fa96fb-5f34-4f6c-932b-14420024f02d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.582822 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data" (OuterVolumeSpecName: "config-data") pod "d8fa96fb-5f34-4f6c-932b-14420024f02d" (UID: "d8fa96fb-5f34-4f6c-932b-14420024f02d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.631424 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.631470 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fa96fb-5f34-4f6c-932b-14420024f02d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:43 crc kubenswrapper[4619]: I0126 11:12:43.631483 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68wtb\" (UniqueName: \"kubernetes.io/projected/d8fa96fb-5f34-4f6c-932b-14420024f02d-kube-api-access-68wtb\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.032516 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-74bc6" event={"ID":"d8fa96fb-5f34-4f6c-932b-14420024f02d","Type":"ContainerDied","Data":"d550b54268d14f1d8a229421029a9b9f32dfa39e1c86d4011d50aef979cd5965"} Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.032589 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d550b54268d14f1d8a229421029a9b9f32dfa39e1c86d4011d50aef979cd5965" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.032652 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-74bc6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.224207 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.224420 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="dnsmasq-dns" containerID="cri-o://2a4b078870c4fac1504af6d5c02c3149efb0f01dfa325b9d995dc89ee3575b33" gracePeriod=10 Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.238728 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p74bp"] Jan 26 11:12:44 crc kubenswrapper[4619]: E0126 11:12:44.239025 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="dnsmasq-dns" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239041 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="dnsmasq-dns" Jan 26 11:12:44 crc kubenswrapper[4619]: E0126 11:12:44.239049 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="init" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239057 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="init" Jan 26 11:12:44 crc kubenswrapper[4619]: E0126 11:12:44.239070 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fa96fb-5f34-4f6c-932b-14420024f02d" containerName="keystone-db-sync" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239075 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fa96fb-5f34-4f6c-932b-14420024f02d" containerName="keystone-db-sync" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239231 4619 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d8fa96fb-5f34-4f6c-932b-14420024f02d" containerName="keystone-db-sync" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239243 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e721746-254e-483a-b19b-cca24a0824d1" containerName="dnsmasq-dns" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.239716 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.248181 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.248354 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.248399 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.248564 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.256102 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hx4x" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.258332 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p74bp"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.297670 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.298932 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.335603 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.342479 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.342549 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhc7\" (UniqueName: \"kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.342568 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.342582 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc 
kubenswrapper[4619]: I0126 11:12:44.342608 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.342676 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447567 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvw8\" (UniqueName: \"kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447675 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447707 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447746 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447770 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447823 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447848 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: 
\"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447906 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447929 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhc7\" (UniqueName: \"kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447949 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447970 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.447992 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.462833 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.476474 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.478067 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.482581 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.483273 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.501947 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhc7\" (UniqueName: \"kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7\") pod \"keystone-bootstrap-p74bp\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.547347 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7zn9h"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.548417 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550330 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvw8\" (UniqueName: \"kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550386 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550450 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550475 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550526 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.550552 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.551353 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc\") pod 
\"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.552057 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.552549 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.553074 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.553570 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.562115 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.584104 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.584678 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.584792 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bthxz" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.593895 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7zn9h"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.612375 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvw8\" (UniqueName: \"kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8\") pod \"dnsmasq-dns-847c4cc679-djqh6\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") " pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.618510 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-djqh6" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.661889 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqt8q\" (UniqueName: \"kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.661926 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.661957 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.661974 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.662009 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.662047 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.709061 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-64h92"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.715804 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.727252 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-66pxq" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.727434 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.734446 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.764581 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-64h92"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765380 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765479 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqt8q\" (UniqueName: \"kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765502 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765528 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765543 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.765571 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.770738 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.785043 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.789783 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.790192 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.814928 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.832813 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.840286 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.844182 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.861145 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.867904 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.867950 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868003 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868032 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868061 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rjvwq\" (UniqueName: \"kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868126 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868154 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868177 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868799 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.868856 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788zc\" (UniqueName: \"kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.977890 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.977935 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.977957 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.977976 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " 
pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978003 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvwq\" (UniqueName: \"kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978050 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978072 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978099 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978160 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.978210 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788zc\" (UniqueName: \"kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.979140 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.989073 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqt8q\" (UniqueName: \"kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q\") pod \"cinder-db-sync-7zn9h\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:44 crc kubenswrapper[4619]: I0126 11:12:44.989535 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.005332 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle\") pod 
\"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.006680 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.011172 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.011448 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.011571 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.013183 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.026755 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.039150 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.073287 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.074758 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.077331 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvwq\" (UniqueName: \"kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq\") pod \"ceilometer-0\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.101939 4619 generic.go:334] "Generic (PLEG): container finished" podID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerID="2a4b078870c4fac1504af6d5c02c3149efb0f01dfa325b9d995dc89ee3575b33" exitCode=0 Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.104819 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-t9pt9" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.105063 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.136385 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" event={"ID":"8a2524a2-4811-44c1-9a0d-9050ba69ea1c","Type":"ContainerDied","Data":"2a4b078870c4fac1504af6d5c02c3149efb0f01dfa325b9d995dc89ee3575b33"} Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.136491 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.137871 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.104872 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.105011 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.105037 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.142902 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-788zc\" (UniqueName: \"kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc\") pod \"neutron-db-sync-64h92\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.191011 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.192143 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f7nh4"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.193123 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.193836 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.193879 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.193934 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194027 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194159 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194244 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194280 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194436 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194495 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxlt\" (UniqueName: \"kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt\") pod 
\"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194555 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.194592 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxkg\" (UniqueName: \"kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.227065 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.246159 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.246353 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dcfmk" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.246470 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.253511 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vmzjm"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.254531 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297361 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297566 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxlt\" (UniqueName: \"kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297676 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297697 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxkg\" (UniqueName: \"kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297748 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwktb\" (UniqueName: \"kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297767 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297783 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297802 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297832 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297911 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297930 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297957 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.297977 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.298006 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.298022 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.298059 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.298081 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 
26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.298107 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.299031 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.301008 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.301982 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.310103 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.310736 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.311260 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.330483 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.330695 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jzqlk" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.331081 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.332288 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb\") pod 
\"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.332720 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.333281 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.347742 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f7nh4"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.375375 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxkg\" (UniqueName: \"kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg\") pod \"dnsmasq-dns-785d8bcb8c-s54b4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.382147 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxlt\" (UniqueName: \"kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt\") pod \"horizon-58f6c6c467-v5nzg\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.395992 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vmzjm"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400520 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400581 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400625 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400650 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400670 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts\") pod 
\"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400729 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwktb\" (UniqueName: \"kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400745 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.400765 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.403302 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.403807 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.408496 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.414832 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.423542 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.439993 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-64h92" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.440286 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.440435 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.456880 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc\") pod \"barbican-db-sync-vmzjm\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") " pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.504208 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.520911 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwktb\" (UniqueName: \"kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb\") pod \"placement-db-sync-f7nh4\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.542136 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.543489 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.555357 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.566685 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77656968b5-6fspz"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.568054 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.585888 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.586126 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hvljd" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.586244 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.606742 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.617738 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7nh4" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.646124 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77656968b5-6fspz"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.666599 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.667823 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.681066 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.698826 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.699005 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.761531 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p74bp"] Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.811781 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.811934 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812011 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2h9\" (UniqueName: \"kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812082 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812161 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812235 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812340 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" 
Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812426 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812494 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812564 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812659 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812765 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.812868 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrds\" (UniqueName: \"kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.845662 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: W0126 11:12:45.848836 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3823a07_f914_4121_b3cd_2f3b4a272480.slice/crio-b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751 WatchSource:0}: Error finding container b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751: Status 404 returned error can't find the container with id b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751 Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.878741 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:45 crc kubenswrapper[4619]: E0126 11:12:45.879580 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run kube-api-access-xwrds logs public-tls-certs scripts], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="0f6c8480-0f15-4b3b-9b68-96948dda62ff" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.933240 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.933506 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.933631 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrpj\" (UniqueName: \"kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.933727 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.934007 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.934099 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.934182 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.934262 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.934331 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts\") pod \"horizon-77656968b5-6fspz\" 
(UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.935897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.935947 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.935978 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936120 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936172 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936274 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwrds\" (UniqueName: \"kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936317 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936344 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936361 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2h9\" (UniqueName: \"kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9\") pod \"horizon-77656968b5-6fspz\" (UID: 
\"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936383 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.936446 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.937536 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.938017 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.938414 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.946972 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.947755 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.948053 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.949553 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.950311 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.972747 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.984142 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2h9\" (UniqueName: \"kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:45 crc kubenswrapper[4619]: I0126 11:12:45.986823 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key\") pod \"horizon-77656968b5-6fspz\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.005973 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.019737 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.031817 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwrds\" (UniqueName: \"kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072016 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072277 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072337 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrpj\" 
(UniqueName: \"kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072373 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072390 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072433 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072534 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.072694 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.077912 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.085718 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.092038 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.101559 4619 util.go:30] "No sandbox for pod can be found. 
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.101559 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77656968b5-6fspz"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.102931 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.120483 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.175122 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"]
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.181203 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.203097 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.216939 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.217170 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrpj\" (UniqueName: \"kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.249383 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.292010 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l" event={"ID":"8a2524a2-4811-44c1-9a0d-9050ba69ea1c","Type":"ContainerDied","Data":"e65595e70df6c018157dda417a3f9ad722774b5969db56b33fb587f5be8aa3bb"}
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.292104 4619 scope.go:117] "RemoveContainer" containerID="2a4b078870c4fac1504af6d5c02c3149efb0f01dfa325b9d995dc89ee3575b33"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.292737 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mwg4l"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.327811 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.329154 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p74bp" event={"ID":"e3823a07-f914-4121-b3cd-2f3b4a272480","Type":"ContainerStarted","Data":"b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751"}
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.381833 4619 scope.go:117] "RemoveContainer" containerID="4130fd0a4cd1630bb28d83fd6bc01587dbeb442827694f8c05bd27660a69f600"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411492 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411538 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411684 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411714 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grkqz\" (UniqueName: \"kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411769 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.411828 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0\") pod \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\" (UID: \"8a2524a2-4811-44c1-9a0d-9050ba69ea1c\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.415483 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.420860 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.446696 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.467141 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz" (OuterVolumeSpecName: "kube-api-access-grkqz") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "kube-api-access-grkqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.520848 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.520881 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.520910 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwrds\" (UniqueName: \"kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.520959 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.520995 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.521045 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.521065 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.521106 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts\") pod \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\" (UID: \"0f6c8480-0f15-4b3b-9b68-96948dda62ff\") "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.521436 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grkqz\" (UniqueName: \"kubernetes.io/projected/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-kube-api-access-grkqz\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.532493 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.532947 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs" (OuterVolumeSpecName: "logs") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.553081 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.559813 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts" (OuterVolumeSpecName: "scripts") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.559878 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds" (OuterVolumeSpecName: "kube-api-access-xwrds") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "kube-api-access-xwrds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.623829 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-logs\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.623856 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwrds\" (UniqueName: \"kubernetes.io/projected/0f6c8480-0f15-4b3b-9b68-96948dda62ff-kube-api-access-xwrds\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.623887 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.623896 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f6c8480-0f15-4b3b-9b68-96948dda62ff-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.623904 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.625526 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config" (OuterVolumeSpecName: "config") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.629297 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data" (OuterVolumeSpecName: "config-data") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.643193 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.665882 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f6c8480-0f15-4b3b-9b68-96948dda62ff" (UID: "0f6c8480-0f15-4b3b-9b68-96948dda62ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.697539 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.699960 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744039 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744057 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744065 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744074 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744082 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.744090 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6c8480-0f15-4b3b-9b68-96948dda62ff-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.767303 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.784792 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.793086 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a2524a2-4811-44c1-9a0d-9050ba69ea1c" (UID: "8a2524a2-4811-44c1-9a0d-9050ba69ea1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.831684 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-64h92"]
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.846832 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.846869 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.846882 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2524a2-4811-44c1-9a0d-9050ba69ea1c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.879370 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.922022 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7zn9h"]
Jan 26 11:12:46 crc kubenswrapper[4619]: I0126 11:12:46.984216 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.003266 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mwg4l"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.104718 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.315948 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" path="/var/lib/kubelet/pods/8a2524a2-4811-44c1-9a0d-9050ba69ea1c/volumes"
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.319187 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.319215 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f7nh4"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.359993 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" event={"ID":"74480c10-4eeb-4a66-99a0-82c49acf75d4","Type":"ContainerStarted","Data":"6fc1fc9e4b693f59d082d85e3c1299d631fda01ca435aa563ed769a704f9958a"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.365705 4619 generic.go:334] "Generic (PLEG): container finished" podID="dceeb899-8622-413b-ac65-ac5e4a51ac8e" containerID="0a6ab986c3fffdcf93221f1faad7bf21766116422d367bad85279782a25a0fe5" exitCode=0
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.365908 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-djqh6" event={"ID":"dceeb899-8622-413b-ac65-ac5e4a51ac8e","Type":"ContainerDied","Data":"0a6ab986c3fffdcf93221f1faad7bf21766116422d367bad85279782a25a0fe5"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.365993 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-djqh6" event={"ID":"dceeb899-8622-413b-ac65-ac5e4a51ac8e","Type":"ContainerStarted","Data":"f937668ed7615f3f6e7ab36b5a59853f5f907a9a5f86c1c1088e7ebf94dc39ea"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.385574 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-64h92" event={"ID":"76fffa56-d701-41ca-8a74-4c72015701f4","Type":"ContainerStarted","Data":"0f60d824264b73a6cd33dcdee653553331bc81616e47b21c9610ac8f258d23b4"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.385649 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-64h92" event={"ID":"76fffa56-d701-41ca-8a74-4c72015701f4","Type":"ContainerStarted","Data":"20a138c6edb89962e7c5b0a31f95c01ed773a0676b63908d4e1b314d61c8538d"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.395444 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerStarted","Data":"2de32133eec8fa8e3c3169bdcb82d5fc750b7904abd8864e2b80f6fd9ed7d1bc"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.410102 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7nh4" event={"ID":"9d5b79a0-cb51-483b-96b0-8f85b385692c","Type":"ContainerStarted","Data":"57e23cace7f90d9465bbaf9c16acb7940da71a536303017e5f04dc7bd9888cff"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.412530 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vmzjm"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.414183 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58f6c6c467-v5nzg" event={"ID":"4ef5cd80-2bd8-445c-bb69-79c53f8c888d","Type":"ContainerStarted","Data":"9d5ec6df7b826faf0b7c070a7e117b9e14c0a0e9bcd73fdc987f6ddcc38963ab"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.419062 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p74bp" event={"ID":"e3823a07-f914-4121-b3cd-2f3b4a272480","Type":"ContainerStarted","Data":"66a6d798b2456c3065d4a03defe6cbd9ab5e5227621f280fde91db675e2f83aa"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.435083 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.435348 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7zn9h" event={"ID":"42f56f30-76de-408b-bbe1-8ef2b764f26b","Type":"ContainerStarted","Data":"638a4d1250dddf57a8e42bb54ccd02cc3175cfc17ad087d2bab6e0bbb65f67d2"}
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.451826 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77656968b5-6fspz"]
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.509716 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-64h92" podStartSLOduration=3.509698706 podStartE2EDuration="3.509698706s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:47.459804807 +0000 UTC m=+1066.493845523" watchObservedRunningTime="2026-01-26 11:12:47.509698706 +0000 UTC m=+1066.543739422"
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.511952 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p74bp" podStartSLOduration=3.511944519 podStartE2EDuration="3.511944519s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:47.496067747 +0000 UTC m=+1066.530108463" watchObservedRunningTime="2026-01-26 11:12:47.511944519 +0000 UTC m=+1066.545985225"
Jan 26 11:12:47 crc kubenswrapper[4619]: I0126 11:12:47.526925 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 11:12:47 crc kubenswrapper[4619]: W0126 11:12:47.544578 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5966e7e7_e6e5_49a0_8ced_11271a3b2325.slice/crio-3d79f823a06db1709a92ecd0847bdfa79fae10df014e2b26b535246461f95162 WatchSource:0}: Error finding container 3d79f823a06db1709a92ecd0847bdfa79fae10df014e2b26b535246461f95162: Status 404 returned error can't find the container with id 3d79f823a06db1709a92ecd0847bdfa79fae10df014e2b26b535246461f95162
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.757541 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.841661 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.872749 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: E0126 11:12:47.873177 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="dnsmasq-dns"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.873192 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="dnsmasq-dns"
Jan 26 11:12:48 crc kubenswrapper[4619]: E0126 11:12:47.873215 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="init"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.873220 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="init"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.873411 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2524a2-4811-44c1-9a0d-9050ba69ea1c" containerName="dnsmasq-dns"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.874395 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.878483 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.880222 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:47.881273 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.014087 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.021904 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn7c\" (UniqueName: \"kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022018 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022060 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022132 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022160 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022196 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.022215 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.131197 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-djqh6"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.136361 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn7c\" (UniqueName: \"kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157517 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157560 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157646 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157674 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157688 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.157709 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.160107 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.164090 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.174066 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.179473 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.190363 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.213762 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.223235 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258409 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvvw8\" (UniqueName: \"kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258461 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258482 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258545 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258703 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.258723 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb\") pod \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\" (UID: \"dceeb899-8622-413b-ac65-ac5e4a51ac8e\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.260022 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn7c\" (UniqueName: \"kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.278148 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: E0126 11:12:48.278807 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="b351f780-1982-4290-8185-798eb12aecad"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.279453 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8" (OuterVolumeSpecName: "kube-api-access-cvvw8") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "kube-api-access-cvvw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.341913 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.344171 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config" (OuterVolumeSpecName: "config") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.344358 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") " pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.361239 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvvw8\" (UniqueName: \"kubernetes.io/projected/dceeb899-8622-413b-ac65-ac5e4a51ac8e-kube-api-access-cvvw8\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.361265 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.413500 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"]
Jan 26 11:12:48 crc kubenswrapper[4619]: E0126 11:12:48.413864 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dceeb899-8622-413b-ac65-ac5e4a51ac8e" containerName="init"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.413875 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="dceeb899-8622-413b-ac65-ac5e4a51ac8e" containerName="init"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.414038 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="dceeb899-8622-413b-ac65-ac5e4a51ac8e" containerName="init"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.433260 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.433376 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.450668 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"]
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.481445 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483228 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483292 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483308 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483343 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483389 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrh2\" (UniqueName: \"kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.483478 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.500457 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-djqh6" event={"ID":"dceeb899-8622-413b-ac65-ac5e4a51ac8e","Type":"ContainerDied","Data":"f937668ed7615f3f6e7ab36b5a59853f5f907a9a5f86c1c1088e7ebf94dc39ea"}
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.500511 4619 scope.go:117] "RemoveContainer" containerID="0a6ab986c3fffdcf93221f1faad7bf21766116422d367bad85279782a25a0fe5"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.500659 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-djqh6"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.519712 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.519876 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vmzjm" event={"ID":"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e","Type":"ContainerStarted","Data":"12ceab4058ee6945f0d671053098031e5003db2ee2f3493931679fa96f5e97e2"}
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.521496 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.528650 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dceeb899-8622-413b-ac65-ac5e4a51ac8e" (UID: "dceeb899-8622-413b-ac65-ac5e4a51ac8e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.535048 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerStarted","Data":"3d79f823a06db1709a92ecd0847bdfa79fae10df014e2b26b535246461f95162"}
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.545145 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77656968b5-6fspz" event={"ID":"65f96a5d-423f-4959-8115-333b47907fd7","Type":"ContainerStarted","Data":"3287ccaa0c3d090dc56427349a3501b5fa2407b55e845ccf4f041a24a2446025"}
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.553245 4619 generic.go:334] "Generic (PLEG): container finished" podID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerID="49dbb3362f95c5fc5ee1202bb056b6c1a772e2296b21592ad081154c3727bc32" exitCode=0
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.554541 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" event={"ID":"74480c10-4eeb-4a66-99a0-82c49acf75d4","Type":"ContainerDied","Data":"49dbb3362f95c5fc5ee1202bb056b6c1a772e2296b21592ad081154c3727bc32"}
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.554844 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585417 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hrh2\" (UniqueName: \"kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585601 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585674 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585696 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585739 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585846 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585859 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.585869 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dceeb899-8622-413b-ac65-ac5e4a51ac8e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.606075 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.607240 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.609597 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.642038 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.642408 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.690675 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hn7c\" (UniqueName: \"kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.690742 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.690897 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.690917 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-public-tls-certs\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.690984 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.691070 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.691102 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.691117 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run\") pod \"b351f780-1982-4290-8185-798eb12aecad\" (UID: \"b351f780-1982-4290-8185-798eb12aecad\") "
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.694138 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs" (OuterVolumeSpecName: "logs") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.695862 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.710056 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c" (OuterVolumeSpecName: "kube-api-access-4hn7c") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "kube-api-access-4hn7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.710155 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.710380 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hrh2\" (UniqueName: \"kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2\") pod \"horizon-6c6cbcb8f7-2lvnl\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " pod="openstack/horizon-6c6cbcb8f7-2lvnl"
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.710585 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.712775 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts" (OuterVolumeSpecName: "scripts") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.715762 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data" (OuterVolumeSpecName: "config-data") pod "b351f780-1982-4290-8185-798eb12aecad" (UID: "b351f780-1982-4290-8185-798eb12aecad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795743 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795786 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795798 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795806 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795816 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351f780-1982-4290-8185-798eb12aecad-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795824 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795831 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b351f780-1982-4290-8185-798eb12aecad-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.795839 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hn7c\" (UniqueName: \"kubernetes.io/projected/b351f780-1982-4290-8185-798eb12aecad-kube-api-access-4hn7c\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.847477 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.897267 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.908996 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c6cbcb8f7-2lvnl" Jan 26 11:12:48 crc kubenswrapper[4619]: I0126 11:12:48.925303 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.000129 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.025099 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-djqh6"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.291391 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6c8480-0f15-4b3b-9b68-96948dda62ff" path="/var/lib/kubelet/pods/0f6c8480-0f15-4b3b-9b68-96948dda62ff/volumes" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.292199 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dceeb899-8622-413b-ac65-ac5e4a51ac8e" path="/var/lib/kubelet/pods/dceeb899-8622-413b-ac65-ac5e4a51ac8e/volumes" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.595431 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.682156 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.704926 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.713190 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.715084 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.721718 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.722152 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.749643 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.813676 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"] Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827261 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8c9\" (UniqueName: \"kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827326 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827353 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827384 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827404 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827431 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827488 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " 
pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.827510 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.929973 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8c9\" (UniqueName: \"kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930073 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930103 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930142 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930170 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930207 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930273 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.930298 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 
11:12:49.930525 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.931284 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.931318 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.937722 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.942152 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.948688 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.954636 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.959902 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8c9\" (UniqueName: \"kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:49 crc kubenswrapper[4619]: I0126 11:12:49.986767 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " pod="openstack/glance-default-external-api-0" Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.044059 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.621100 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6cbcb8f7-2lvnl" event={"ID":"985466ba-fea8-4557-ae46-94d1cc13fee1","Type":"ContainerStarted","Data":"746dbfff98ad12add6d6b61b6c039ca48453a537c6058f64d6d339c5df5f880b"} Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.623136 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerStarted","Data":"cbbb07325d61630985c18eedc4ecbe2d498b42beef3fbc7f2e9910dd8f7666bf"} Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.629385 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" event={"ID":"74480c10-4eeb-4a66-99a0-82c49acf75d4","Type":"ContainerStarted","Data":"81ff015cadea4ceebbc379201a125a8398879a2727da4f4bffcf76c804875e13"} Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.630819 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.838970 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" podStartSLOduration=6.838952949 podStartE2EDuration="6.838952949s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:50.655078091 +0000 UTC m=+1069.689118807" watchObservedRunningTime="2026-01-26 11:12:50.838952949 +0000 UTC m=+1069.872993665" Jan 26 11:12:50 crc kubenswrapper[4619]: I0126 11:12:50.839728 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:50 crc kubenswrapper[4619]: W0126 11:12:50.840592 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94255302_af1f_4a1c_b0ad_09e0ea70d1ba.slice/crio-deea4e2944fd14c6e0192ba440a12ee95ca99072292687455c4d5729d665b69b WatchSource:0}: Error finding container deea4e2944fd14c6e0192ba440a12ee95ca99072292687455c4d5729d665b69b: Status 404 returned error can't find the container with id deea4e2944fd14c6e0192ba440a12ee95ca99072292687455c4d5729d665b69b Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.287419 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b351f780-1982-4290-8185-798eb12aecad" path="/var/lib/kubelet/pods/b351f780-1982-4290-8185-798eb12aecad/volumes" Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.645261 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerStarted","Data":"689628108ca2ac956e8ba144d48a3c4adb34e244854ceb0568fc01f95126f807"} Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.645340 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-log" containerID="cri-o://cbbb07325d61630985c18eedc4ecbe2d498b42beef3fbc7f2e9910dd8f7666bf" gracePeriod=30 Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.645374 4619 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-httpd" containerID="cri-o://689628108ca2ac956e8ba144d48a3c4adb34e244854ceb0568fc01f95126f807" gracePeriod=30 Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.648287 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerStarted","Data":"deea4e2944fd14c6e0192ba440a12ee95ca99072292687455c4d5729d665b69b"} Jan 26 11:12:51 crc kubenswrapper[4619]: I0126 11:12:51.695256 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.695241602 podStartE2EDuration="6.695241602s" podCreationTimestamp="2026-01-26 11:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:51.690974793 +0000 UTC m=+1070.725015509" watchObservedRunningTime="2026-01-26 11:12:51.695241602 +0000 UTC m=+1070.729282318" Jan 26 11:12:52 crc kubenswrapper[4619]: I0126 11:12:52.678196 4619 generic.go:334] "Generic (PLEG): container finished" podID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerID="689628108ca2ac956e8ba144d48a3c4adb34e244854ceb0568fc01f95126f807" exitCode=143 Jan 26 11:12:52 crc kubenswrapper[4619]: I0126 11:12:52.678241 4619 generic.go:334] "Generic (PLEG): container finished" podID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerID="cbbb07325d61630985c18eedc4ecbe2d498b42beef3fbc7f2e9910dd8f7666bf" exitCode=143 Jan 26 11:12:52 crc kubenswrapper[4619]: I0126 11:12:52.679122 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerDied","Data":"689628108ca2ac956e8ba144d48a3c4adb34e244854ceb0568fc01f95126f807"} Jan 26 11:12:52 crc kubenswrapper[4619]: I0126 11:12:52.679148 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerDied","Data":"cbbb07325d61630985c18eedc4ecbe2d498b42beef3fbc7f2e9910dd8f7666bf"} Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.363776 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.447900 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.447981 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448011 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448063 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448096 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448121 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.449087 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.449118 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrpj\" (UniqueName: \"kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj\") pod \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\" (UID: \"5966e7e7-e6e5-49a0-8ced-11271a3b2325\") " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448444 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.448503 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs" (OuterVolumeSpecName: "logs") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.449587 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.449609 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5966e7e7-e6e5-49a0-8ced-11271a3b2325-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.454758 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.455997 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj" (OuterVolumeSpecName: "kube-api-access-rrrpj") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "kube-api-access-rrrpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.456856 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts" (OuterVolumeSpecName: "scripts") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.516215 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.553077 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.553126 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrrpj\" (UniqueName: \"kubernetes.io/projected/5966e7e7-e6e5-49a0-8ced-11271a3b2325-kube-api-access-rrrpj\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.553139 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.553149 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.557907 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data" (OuterVolumeSpecName: "config-data") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.574438 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.650759 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5966e7e7-e6e5-49a0-8ced-11271a3b2325" (UID: "5966e7e7-e6e5-49a0-8ced-11271a3b2325"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.655242 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.655276 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.655293 4619 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5966e7e7-e6e5-49a0-8ced-11271a3b2325-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.715744 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5966e7e7-e6e5-49a0-8ced-11271a3b2325","Type":"ContainerDied","Data":"3d79f823a06db1709a92ecd0847bdfa79fae10df014e2b26b535246461f95162"} Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.715795 4619 scope.go:117] "RemoveContainer" containerID="689628108ca2ac956e8ba144d48a3c4adb34e244854ceb0568fc01f95126f807" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.715897 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.804601 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerStarted","Data":"0f292513a86f05552ff13ebb5ba604c9ed03605424e7c674d93a4e2082792118"} Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.811990 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.854659 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.880555 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:53 crc kubenswrapper[4619]: E0126 11:12:53.880968 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-log" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.880982 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-log" Jan 26 11:12:53 crc kubenswrapper[4619]: E0126 11:12:53.881016 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-httpd" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.881022 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-httpd" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.881190 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-log" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.881202 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" containerName="glance-httpd" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 
11:12:53.882140 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.888533 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.892582 4619 scope.go:117] "RemoveContainer" containerID="cbbb07325d61630985c18eedc4ecbe2d498b42beef3fbc7f2e9910dd8f7666bf" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.893447 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.893647 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.984521 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.984574 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985157 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985189 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985319 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985350 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvhv\" (UniqueName: \"kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985405 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:53 crc kubenswrapper[4619]: I0126 11:12:53.985485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087424 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087524 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087549 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087590 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087624 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087663 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087690 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvhv\" (UniqueName: \"kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.087719 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.088306 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.090634 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.091179 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.093385 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.093635 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.096214 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.096994 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.113964 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvhv\" (UniqueName: \"kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.131673 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.230811 4619 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.717733 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77656968b5-6fspz"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.756925 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.758236 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.760424 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.792147 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.823760 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.845883 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerStarted","Data":"10c71bef6b2ee4a00cf7a177c097074767015c39943c7ed0639d680089dde071"} Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.893890 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.89387092 podStartE2EDuration="5.89387092s" podCreationTimestamp="2026-01-26 11:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:12:54.893337175 +0000 UTC m=+1073.927377911" watchObservedRunningTime="2026-01-26 11:12:54.89387092 +0000 UTC m=+1073.927911636" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903144 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903222 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903249 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903295 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-668ht\" (UniqueName: \"kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " 
pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903347 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903386 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.903407 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.931501 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.940666 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.959799 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-846d64d6c4-66jvl"] Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.975279 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:54 crc kubenswrapper[4619]: I0126 11:12:54.981519 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-846d64d6c4-66jvl"] Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.006054 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.006100 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.006167 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-668ht\" (UniqueName: \"kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.007746 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.007813 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.007844 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.007935 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.018107 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.022864 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle\") pod \"horizon-6f67c775d4-7ls4r\" (UID: 
\"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.027864 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.030064 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.030501 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.040771 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-668ht\" (UniqueName: \"kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.046434 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data\") pod \"horizon-6f67c775d4-7ls4r\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.100842 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.109804 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnc9\" (UniqueName: \"kubernetes.io/projected/10c8ed10-dab5-49e5-a030-4be99c720ae0-kube-api-access-rfnc9\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111196 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-secret-key\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111230 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-scripts\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111312 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-config-data\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111338 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c8ed10-dab5-49e5-a030-4be99c720ae0-logs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111485 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-tls-certs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.111593 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-combined-ca-bundle\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213604 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-combined-ca-bundle\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213684 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnc9\" (UniqueName: \"kubernetes.io/projected/10c8ed10-dab5-49e5-a030-4be99c720ae0-kube-api-access-rfnc9\") pod \"horizon-846d64d6c4-66jvl\" (UID: 
\"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213731 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-secret-key\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213755 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-scripts\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213789 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-config-data\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213813 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c8ed10-dab5-49e5-a030-4be99c720ae0-logs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.213857 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-tls-certs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.218268 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c8ed10-dab5-49e5-a030-4be99c720ae0-logs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.218751 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-scripts\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.219236 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c8ed10-dab5-49e5-a030-4be99c720ae0-config-data\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.226195 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-combined-ca-bundle\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.230951 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-secret-key\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.232348 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnc9\" (UniqueName: \"kubernetes.io/projected/10c8ed10-dab5-49e5-a030-4be99c720ae0-kube-api-access-rfnc9\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.247605 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c8ed10-dab5-49e5-a030-4be99c720ae0-horizon-tls-certs\") pod \"horizon-846d64d6c4-66jvl\" (UID: \"10c8ed10-dab5-49e5-a030-4be99c720ae0\") " pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.289901 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5966e7e7-e6e5-49a0-8ced-11271a3b2325" path="/var/lib/kubelet/pods/5966e7e7-e6e5-49a0-8ced-11271a3b2325/volumes" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.305073 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.507764 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.665906 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"] Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.666131 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-j8vh8" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" containerID="cri-o://1262c33e48ef3ad40e73b2b2b91e37af1a0d765d631b0d5876d287d2c8b278c5" gracePeriod=10 Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.870896 4619 generic.go:334] "Generic (PLEG): container finished" podID="e3823a07-f914-4121-b3cd-2f3b4a272480" containerID="66a6d798b2456c3065d4a03defe6cbd9ab5e5227621f280fde91db675e2f83aa" exitCode=0 Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.870958 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p74bp" event={"ID":"e3823a07-f914-4121-b3cd-2f3b4a272480","Type":"ContainerDied","Data":"66a6d798b2456c3065d4a03defe6cbd9ab5e5227621f280fde91db675e2f83aa"} Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.898875 4619 generic.go:334] "Generic (PLEG): container finished" podID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerID="1262c33e48ef3ad40e73b2b2b91e37af1a0d765d631b0d5876d287d2c8b278c5" exitCode=0 Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.899076 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-log" containerID="cri-o://0f292513a86f05552ff13ebb5ba604c9ed03605424e7c674d93a4e2082792118" gracePeriod=30 Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.899358 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-j8vh8" 
event={"ID":"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb","Type":"ContainerDied","Data":"1262c33e48ef3ad40e73b2b2b91e37af1a0d765d631b0d5876d287d2c8b278c5"} Jan 26 11:12:55 crc kubenswrapper[4619]: I0126 11:12:55.899419 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-httpd" containerID="cri-o://10c71bef6b2ee4a00cf7a177c097074767015c39943c7ed0639d680089dde071" gracePeriod=30 Jan 26 11:12:56 crc kubenswrapper[4619]: I0126 11:12:56.567591 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-j8vh8" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 26 11:12:56 crc kubenswrapper[4619]: I0126 11:12:56.939135 4619 generic.go:334] "Generic (PLEG): container finished" podID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerID="10c71bef6b2ee4a00cf7a177c097074767015c39943c7ed0639d680089dde071" exitCode=0 Jan 26 11:12:56 crc kubenswrapper[4619]: I0126 11:12:56.939621 4619 generic.go:334] "Generic (PLEG): container finished" podID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerID="0f292513a86f05552ff13ebb5ba604c9ed03605424e7c674d93a4e2082792118" exitCode=143 Jan 26 11:12:56 crc kubenswrapper[4619]: I0126 11:12:56.939335 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerDied","Data":"10c71bef6b2ee4a00cf7a177c097074767015c39943c7ed0639d680089dde071"} Jan 26 11:12:56 crc kubenswrapper[4619]: I0126 11:12:56.939872 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerDied","Data":"0f292513a86f05552ff13ebb5ba604c9ed03605424e7c674d93a4e2082792118"} Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.150605 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.206271 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhc7\" (UniqueName: \"kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.207098 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.207131 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.207176 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.207255 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.207325 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys\") pod \"e3823a07-f914-4121-b3cd-2f3b4a272480\" (UID: \"e3823a07-f914-4121-b3cd-2f3b4a272480\") " Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.223193 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.224465 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts" (OuterVolumeSpecName: "scripts") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.225474 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7" (OuterVolumeSpecName: "kube-api-access-vfhc7") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "kube-api-access-vfhc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.245205 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.252973 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.268785 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data" (OuterVolumeSpecName: "config-data") pod "e3823a07-f914-4121-b3cd-2f3b4a272480" (UID: "e3823a07-f914-4121-b3cd-2f3b4a272480"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310552 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310580 4619 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310589 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfhc7\" (UniqueName: \"kubernetes.io/projected/e3823a07-f914-4121-b3cd-2f3b4a272480-kube-api-access-vfhc7\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310598 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310606 4619 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.310628 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3823a07-f914-4121-b3cd-2f3b4a272480-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.980862 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p74bp" event={"ID":"e3823a07-f914-4121-b3cd-2f3b4a272480","Type":"ContainerDied","Data":"b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751"} Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.980907 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a244716aba4053a591a01d4c67d71311664f08b0ae6a756bc9b5ef1ea35751" Jan 26 11:12:58 crc kubenswrapper[4619]: I0126 11:12:58.980988 4619 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p74bp" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.228854 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p74bp"] Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.236582 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p74bp"] Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.273458 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3823a07-f914-4121-b3cd-2f3b4a272480" path="/var/lib/kubelet/pods/e3823a07-f914-4121-b3cd-2f3b4a272480/volumes" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.326182 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ltrbg"] Jan 26 11:12:59 crc kubenswrapper[4619]: E0126 11:12:59.326826 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3823a07-f914-4121-b3cd-2f3b4a272480" containerName="keystone-bootstrap" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.371742 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3823a07-f914-4121-b3cd-2f3b4a272480" containerName="keystone-bootstrap" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.372327 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3823a07-f914-4121-b3cd-2f3b4a272480" containerName="keystone-bootstrap" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.373051 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.374002 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltrbg"] Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.375544 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.375836 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.376037 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.376155 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hx4x" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.376915 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.430905 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.430998 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.431085 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.431163 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.431284 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.431391 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9bv\" (UniqueName: \"kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.532909 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.532992 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.533037 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.533133 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9bv\" (UniqueName: \"kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.533169 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.533195 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.543140 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.544329 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.544369 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.548916 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.560720 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.561358 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9bv\" (UniqueName: \"kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv\") pod \"keystone-bootstrap-ltrbg\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") " pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:12:59 crc kubenswrapper[4619]: I0126 11:12:59.693286 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ltrbg" Jan 26 11:13:06 crc kubenswrapper[4619]: I0126 11:13:06.568482 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-j8vh8" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.527169 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.527559 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64bh546h59bh64bhbdh5ffh9ch58chf5h5c4h579h7bh657h548h58dhf6h5f9h559h68h5b8h669hbch64dh578h5c6h65bhdh649h57dh65h564h646q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hrh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6c6cbcb8f7-2lvnl_openstack(985466ba-fea8-4557-ae46-94d1cc13fee1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.532355 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6c6cbcb8f7-2lvnl" podUID="985466ba-fea8-4557-ae46-94d1cc13fee1" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.581726 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.581913 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54dhf5h75h6h5f4hf7h6bh84h64bh5cch66bh698hbbh5h556h88h64fh55ch597h65dh55dhbdhf6hfh685h597h597h564hffh58dhc4h554q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsxlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-58f6c6c467-v5nzg_openstack(4ef5cd80-2bd8-445c-bb69-79c53f8c888d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:07 crc kubenswrapper[4619]: E0126 11:13:07.585485 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-58f6c6c467-v5nzg" podUID="4ef5cd80-2bd8-445c-bb69-79c53f8c888d" Jan 26 11:13:09 crc kubenswrapper[4619]: E0126 11:13:09.397311 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 11:13:09 crc kubenswrapper[4619]: E0126 11:13:09.398243 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4hdch5f8h58dh7fh5b9h6ch56h67hb7h697h5fh5f6h68ch566h654h59fhf9h5b6hc5h5f5h5b7h694h5d4h66ch5bfh7h8bh96hcfh56bh654q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n2h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77656968b5-6fspz_openstack(65f96a5d-423f-4959-8115-333b47907fd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:09 crc kubenswrapper[4619]: E0126 11:13:09.402812 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-77656968b5-6fspz" podUID="65f96a5d-423f-4959-8115-333b47907fd7" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.512565 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.541136 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646376 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx8c9\" (UniqueName: \"kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646452 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646640 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") pod \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646678 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc\") pod \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb\") pod \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646874 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.646901 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.647022 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb\") pod \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.647485 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.647534 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc 
kubenswrapper[4619]: I0126 11:13:09.647553 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skjbv\" (UniqueName: \"kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv\") pod \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\" (UID: \"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.647605 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.647640 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs\") pod \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\" (UID: \"94255302-af1f-4a1c-b0ad-09e0ea70d1ba\") " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.648415 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.653439 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs" (OuterVolumeSpecName: "logs") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.655795 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9" (OuterVolumeSpecName: "kube-api-access-hx8c9") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "kube-api-access-hx8c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.659368 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.659456 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv" (OuterVolumeSpecName: "kube-api-access-skjbv") pod "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" (UID: "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb"). InnerVolumeSpecName "kube-api-access-skjbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.669873 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts" (OuterVolumeSpecName: "scripts") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.727926 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" (UID: "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.735632 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.735960 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.739789 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" (UID: "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.742923 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data" (OuterVolumeSpecName: "config-data") pod "94255302-af1f-4a1c-b0ad-09e0ea70d1ba" (UID: "94255302-af1f-4a1c-b0ad-09e0ea70d1ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753667 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753693 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753704 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753712 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753721 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753752 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753761 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753770 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skjbv\" (UniqueName: \"kubernetes.io/projected/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-kube-api-access-skjbv\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753780 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753788 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.753797 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx8c9\" (UniqueName: \"kubernetes.io/projected/94255302-af1f-4a1c-b0ad-09e0ea70d1ba-kube-api-access-hx8c9\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.761861 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config" (OuterVolumeSpecName: "config") pod "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" (UID: "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.769635 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" (UID: "9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.778728 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.855675 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.855718 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.855730 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:09 crc kubenswrapper[4619]: I0126 11:13:09.939098 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.067573 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.067566 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94255302-af1f-4a1c-b0ad-09e0ea70d1ba","Type":"ContainerDied","Data":"deea4e2944fd14c6e0192ba440a12ee95ca99072292687455c4d5729d665b69b"} Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.067666 4619 scope.go:117] "RemoveContainer" containerID="10c71bef6b2ee4a00cf7a177c097074767015c39943c7ed0639d680089dde071" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.072472 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-j8vh8" event={"ID":"9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb","Type":"ContainerDied","Data":"f0c1b32ee6acc5f1149905b458b1988bbd742f8facc66cbb8452e0b2d46121df"} Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.072769 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-j8vh8" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.149634 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.171848 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-j8vh8"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.197225 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.208378 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.220681 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:13:10 crc kubenswrapper[4619]: E0126 11:13:10.221186 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221207 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" Jan 26 11:13:10 crc kubenswrapper[4619]: E0126 11:13:10.221229 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="init" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221239 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="init" Jan 26 11:13:10 crc kubenswrapper[4619]: E0126 11:13:10.221251 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-httpd" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221258 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-httpd" Jan 26 11:13:10 crc kubenswrapper[4619]: E0126 11:13:10.221278 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-log" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221284 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-log" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221468 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-log" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221487 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" containerName="glance-httpd" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.221497 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.222818 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.226526 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.226603 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.230368 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.268901 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.268944 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.268977 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.269045 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.269080 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.269113 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.269238 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.269275 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n7rx5\" (UniqueName: \"kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.370918 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371267 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371323 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371374 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371442 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371449 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371492 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371570 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.371597 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rx5\" (UniqueName: 
\"kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.372976 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.375928 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.378459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.379388 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.396872 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.396954 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rx5\" (UniqueName: \"kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.403421 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.411717 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " pod="openstack/glance-default-external-api-0" Jan 26 11:13:10 crc kubenswrapper[4619]: I0126 11:13:10.553304 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:13:11 crc kubenswrapper[4619]: I0126 11:13:11.273801 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94255302-af1f-4a1c-b0ad-09e0ea70d1ba" path="/var/lib/kubelet/pods/94255302-af1f-4a1c-b0ad-09e0ea70d1ba/volumes" Jan 26 11:13:11 crc kubenswrapper[4619]: I0126 11:13:11.274651 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" path="/var/lib/kubelet/pods/9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb/volumes" Jan 26 11:13:11 crc kubenswrapper[4619]: I0126 11:13:11.569681 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-j8vh8" podUID="9d70b210-15fc-4ea1-9ea8-b4e0a4145cfb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 26 11:13:21 crc kubenswrapper[4619]: E0126 11:13:21.006008 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 26 11:13:21 crc kubenswrapper[4619]: E0126 11:13:21.006596 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n577h645h57ch648h5bbhcbh5f6h565h5b9h654hbbh68bh59h5d8hdbh566h5fh87h58h6dh687h655h5c9hdbh58ch665h689hfch58h7dh596h685q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjvwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a59562e9-8459-4c22-a737-f6bde480fc2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.100224 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.104683 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6cbcb8f7-2lvnl" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.159764 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6cbcb8f7-2lvnl" event={"ID":"985466ba-fea8-4557-ae46-94d1cc13fee1","Type":"ContainerDied","Data":"746dbfff98ad12add6d6b61b6c039ca48453a537c6058f64d6d339c5df5f880b"} Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.159780 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6cbcb8f7-2lvnl" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.162647 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58f6c6c467-v5nzg" event={"ID":"4ef5cd80-2bd8-445c-bb69-79c53f8c888d","Type":"ContainerDied","Data":"9d5ec6df7b826faf0b7c070a7e117b9e14c0a0e9bcd73fdc987f6ddcc38963ab"} Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.162711 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58f6c6c467-v5nzg" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297302 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts\") pod \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297403 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key\") pod \"985466ba-fea8-4557-ae46-94d1cc13fee1\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297445 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data\") pod \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297499 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key\") pod \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297523 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxlt\" (UniqueName: \"kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt\") pod \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297544 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs\") pod \"985466ba-fea8-4557-ae46-94d1cc13fee1\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297579 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hrh2\" (UniqueName: \"kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2\") pod \"985466ba-fea8-4557-ae46-94d1cc13fee1\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297677 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs\") pod \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\" (UID: \"4ef5cd80-2bd8-445c-bb69-79c53f8c888d\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.297713 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts\") pod \"985466ba-fea8-4557-ae46-94d1cc13fee1\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.298813 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs" (OuterVolumeSpecName: "logs") pod "4ef5cd80-2bd8-445c-bb69-79c53f8c888d" (UID: "4ef5cd80-2bd8-445c-bb69-79c53f8c888d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.299137 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data\") pod \"985466ba-fea8-4557-ae46-94d1cc13fee1\" (UID: \"985466ba-fea8-4557-ae46-94d1cc13fee1\") " Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.299156 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts" (OuterVolumeSpecName: "scripts") pod "4ef5cd80-2bd8-445c-bb69-79c53f8c888d" (UID: "4ef5cd80-2bd8-445c-bb69-79c53f8c888d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.299310 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs" (OuterVolumeSpecName: "logs") pod "985466ba-fea8-4557-ae46-94d1cc13fee1" (UID: "985466ba-fea8-4557-ae46-94d1cc13fee1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.299718 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts" (OuterVolumeSpecName: "scripts") pod "985466ba-fea8-4557-ae46-94d1cc13fee1" (UID: "985466ba-fea8-4557-ae46-94d1cc13fee1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.299739 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data" (OuterVolumeSpecName: "config-data") pod "4ef5cd80-2bd8-445c-bb69-79c53f8c888d" (UID: "4ef5cd80-2bd8-445c-bb69-79c53f8c888d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300107 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300136 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/985466ba-fea8-4557-ae46-94d1cc13fee1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300148 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300158 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300168 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.300171 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data" (OuterVolumeSpecName: "config-data") pod "985466ba-fea8-4557-ae46-94d1cc13fee1" (UID: "985466ba-fea8-4557-ae46-94d1cc13fee1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.304545 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "985466ba-fea8-4557-ae46-94d1cc13fee1" (UID: "985466ba-fea8-4557-ae46-94d1cc13fee1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.304651 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2" (OuterVolumeSpecName: "kube-api-access-9hrh2") pod "985466ba-fea8-4557-ae46-94d1cc13fee1" (UID: "985466ba-fea8-4557-ae46-94d1cc13fee1"). InnerVolumeSpecName "kube-api-access-9hrh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.305125 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt" (OuterVolumeSpecName: "kube-api-access-hsxlt") pod "4ef5cd80-2bd8-445c-bb69-79c53f8c888d" (UID: "4ef5cd80-2bd8-445c-bb69-79c53f8c888d"). InnerVolumeSpecName "kube-api-access-hsxlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.305333 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4ef5cd80-2bd8-445c-bb69-79c53f8c888d" (UID: "4ef5cd80-2bd8-445c-bb69-79c53f8c888d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.402210 4619 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/985466ba-fea8-4557-ae46-94d1cc13fee1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.402238 4619 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.402248 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxlt\" (UniqueName: \"kubernetes.io/projected/4ef5cd80-2bd8-445c-bb69-79c53f8c888d-kube-api-access-hsxlt\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.402258 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hrh2\" (UniqueName: \"kubernetes.io/projected/985466ba-fea8-4557-ae46-94d1cc13fee1-kube-api-access-9hrh2\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.402267 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/985466ba-fea8-4557-ae46-94d1cc13fee1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.534670 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"] Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.542664 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c6cbcb8f7-2lvnl"] Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.564412 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"] Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.573774 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58f6c6c467-v5nzg"] Jan 26 11:13:21 crc kubenswrapper[4619]: E0126 11:13:21.881954 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 26 11:13:21 crc kubenswrapper[4619]: E0126 11:13:21.882169 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm4zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vmzjm_openstack(4fe185f3-c64d-47a7-9c93-f40ef8d24d9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:21 crc kubenswrapper[4619]: E0126 11:13:21.887015 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vmzjm" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" Jan 26 11:13:21 crc kubenswrapper[4619]: I0126 11:13:21.963100 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.113578 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data\") pod \"65f96a5d-423f-4959-8115-333b47907fd7\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.113727 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts\") pod \"65f96a5d-423f-4959-8115-333b47907fd7\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.113834 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key\") pod \"65f96a5d-423f-4959-8115-333b47907fd7\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.113860 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2h9\" (UniqueName: \"kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9\") pod \"65f96a5d-423f-4959-8115-333b47907fd7\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.114165 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs\") pod \"65f96a5d-423f-4959-8115-333b47907fd7\" (UID: \"65f96a5d-423f-4959-8115-333b47907fd7\") " Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.114197 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data" (OuterVolumeSpecName: "config-data") pod "65f96a5d-423f-4959-8115-333b47907fd7" (UID: "65f96a5d-423f-4959-8115-333b47907fd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.114495 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.114972 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts" (OuterVolumeSpecName: "scripts") pod "65f96a5d-423f-4959-8115-333b47907fd7" (UID: "65f96a5d-423f-4959-8115-333b47907fd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.115431 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs" (OuterVolumeSpecName: "logs") pod "65f96a5d-423f-4959-8115-333b47907fd7" (UID: "65f96a5d-423f-4959-8115-333b47907fd7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.117293 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "65f96a5d-423f-4959-8115-333b47907fd7" (UID: "65f96a5d-423f-4959-8115-333b47907fd7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.122255 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9" (OuterVolumeSpecName: "kube-api-access-9n2h9") pod "65f96a5d-423f-4959-8115-333b47907fd7" (UID: "65f96a5d-423f-4959-8115-333b47907fd7"). InnerVolumeSpecName "kube-api-access-9n2h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.171549 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77656968b5-6fspz" event={"ID":"65f96a5d-423f-4959-8115-333b47907fd7","Type":"ContainerDied","Data":"3287ccaa0c3d090dc56427349a3501b5fa2407b55e845ccf4f041a24a2446025"} Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.171651 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77656968b5-6fspz" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.175834 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerStarted","Data":"6d40629ba966c74b319e03e6f742e05c43700ea4f6543eb7849dafa00294d3c4"} Jan 26 11:13:22 crc kubenswrapper[4619]: E0126 11:13:22.178960 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vmzjm" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.220282 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f96a5d-423f-4959-8115-333b47907fd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.220318 4619 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65f96a5d-423f-4959-8115-333b47907fd7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.220330 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2h9\" (UniqueName: \"kubernetes.io/projected/65f96a5d-423f-4959-8115-333b47907fd7-kube-api-access-9n2h9\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.220339 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f96a5d-423f-4959-8115-333b47907fd7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.271901 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77656968b5-6fspz"] Jan 26 11:13:22 crc kubenswrapper[4619]: I0126 11:13:22.279496 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77656968b5-6fspz"] Jan 
26 11:13:23 crc kubenswrapper[4619]: I0126 11:13:23.271037 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef5cd80-2bd8-445c-bb69-79c53f8c888d" path="/var/lib/kubelet/pods/4ef5cd80-2bd8-445c-bb69-79c53f8c888d/volumes" Jan 26 11:13:23 crc kubenswrapper[4619]: I0126 11:13:23.271480 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f96a5d-423f-4959-8115-333b47907fd7" path="/var/lib/kubelet/pods/65f96a5d-423f-4959-8115-333b47907fd7/volumes" Jan 26 11:13:23 crc kubenswrapper[4619]: I0126 11:13:23.271905 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985466ba-fea8-4557-ae46-94d1cc13fee1" path="/var/lib/kubelet/pods/985466ba-fea8-4557-ae46-94d1cc13fee1/volumes" Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.192317 4619 generic.go:334] "Generic (PLEG): container finished" podID="76fffa56-d701-41ca-8a74-4c72015701f4" containerID="0f60d824264b73a6cd33dcdee653553331bc81616e47b21c9610ac8f258d23b4" exitCode=0 Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.192363 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-64h92" event={"ID":"76fffa56-d701-41ca-8a74-4c72015701f4","Type":"ContainerDied","Data":"0f60d824264b73a6cd33dcdee653553331bc81616e47b21c9610ac8f258d23b4"} Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.302625 4619 scope.go:117] "RemoveContainer" containerID="0f292513a86f05552ff13ebb5ba604c9ed03605424e7c674d93a4e2082792118" Jan 26 11:13:24 crc kubenswrapper[4619]: E0126 11:13:24.359812 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 11:13:24 crc kubenswrapper[4619]: E0126 11:13:24.360038 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqt8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7zn9h_openstack(42f56f30-76de-408b-bbe1-8ef2b764f26b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:13:24 crc kubenswrapper[4619]: E0126 11:13:24.361694 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7zn9h" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.777308 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-846d64d6c4-66jvl"] Jan 26 11:13:24 crc kubenswrapper[4619]: W0126 11:13:24.821525 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c8ed10_dab5_49e5_a030_4be99c720ae0.slice/crio-c02d80ff75d2024250e10b4070822b4d43ba8de3d3682a211e13fb6413c478c1 WatchSource:0}: Error finding container c02d80ff75d2024250e10b4070822b4d43ba8de3d3682a211e13fb6413c478c1: Status 404 returned error can't find the container with id c02d80ff75d2024250e10b4070822b4d43ba8de3d3682a211e13fb6413c478c1 Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.863262 4619 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.879303 4619 scope.go:117] "RemoveContainer" containerID="1262c33e48ef3ad40e73b2b2b91e37af1a0d765d631b0d5876d287d2c8b278c5" Jan 26 11:13:24 crc kubenswrapper[4619]: W0126 11:13:24.890227 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod670c0ff7_8d41_4dc2_81d7_b64d24b11d3d.slice/crio-171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708 WatchSource:0}: Error finding container 171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708: Status 404 returned error can't find the container with id 171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708 Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.913683 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltrbg"] Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.923843 4619 scope.go:117] "RemoveContainer" containerID="daeebf884dff31dff7ef37d24c5e0687696c867b6ab7690fa7d1ef93db7c5731" Jan 26 11:13:24 crc kubenswrapper[4619]: I0126 11:13:24.985216 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.203845 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltrbg" event={"ID":"3628aeb0-1e2e-4275-a914-31e18f47a989","Type":"ContainerStarted","Data":"66d8942d6a7d67cb2d492d42166ca9c5586f79270adc786707236dbd76523d58"} Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.207538 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerStarted","Data":"171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708"} Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.219851 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-846d64d6c4-66jvl" event={"ID":"10c8ed10-dab5-49e5-a030-4be99c720ae0","Type":"ContainerStarted","Data":"c02d80ff75d2024250e10b4070822b4d43ba8de3d3682a211e13fb6413c478c1"} Jan 26 11:13:25 crc kubenswrapper[4619]: E0126 11:13:25.230289 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7zn9h" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.386310 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.720742 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-64h92" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.792640 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-788zc\" (UniqueName: \"kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc\") pod \"76fffa56-d701-41ca-8a74-4c72015701f4\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.792703 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle\") pod \"76fffa56-d701-41ca-8a74-4c72015701f4\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.792761 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config\") pod \"76fffa56-d701-41ca-8a74-4c72015701f4\" (UID: \"76fffa56-d701-41ca-8a74-4c72015701f4\") " Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.802745 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc" (OuterVolumeSpecName: "kube-api-access-788zc") pod "76fffa56-d701-41ca-8a74-4c72015701f4" (UID: "76fffa56-d701-41ca-8a74-4c72015701f4"). InnerVolumeSpecName "kube-api-access-788zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.831122 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config" (OuterVolumeSpecName: "config") pod "76fffa56-d701-41ca-8a74-4c72015701f4" (UID: "76fffa56-d701-41ca-8a74-4c72015701f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.833112 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76fffa56-d701-41ca-8a74-4c72015701f4" (UID: "76fffa56-d701-41ca-8a74-4c72015701f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.894879 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-788zc\" (UniqueName: \"kubernetes.io/projected/76fffa56-d701-41ca-8a74-4c72015701f4-kube-api-access-788zc\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.894916 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:25 crc kubenswrapper[4619]: I0126 11:13:25.894927 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fffa56-d701-41ca-8a74-4c72015701f4-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.256495 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7nh4" event={"ID":"9d5b79a0-cb51-483b-96b0-8f85b385692c","Type":"ContainerStarted","Data":"9dc902fa13ed66ab7c2dcd87b7b6ea31f0c59da47277b4b5f9c421710efa76de"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.260012 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerStarted","Data":"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.263996 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerStarted","Data":"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.264031 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerStarted","Data":"6ed494a56105879bb0c98c9bb1dc69f2e132d6a3178fbc325b37a300b665b7e2"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.276868 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f7nh4" podStartSLOduration=5.329247291 podStartE2EDuration="42.276849874s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="2026-01-26 11:12:47.345335332 +0000 UTC m=+1066.379376048" lastFinishedPulling="2026-01-26 11:13:24.292937915 +0000 UTC m=+1103.326978631" observedRunningTime="2026-01-26 11:13:26.276401862 +0000 UTC m=+1105.310442578" watchObservedRunningTime="2026-01-26 11:13:26.276849874 +0000 UTC m=+1105.310890590" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.278578 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-846d64d6c4-66jvl" event={"ID":"10c8ed10-dab5-49e5-a030-4be99c720ae0","Type":"ContainerStarted","Data":"d27a7212d9be76fc53e45b9d6ccefc04025f9b2c7a9b45834a4e8810c17eaca8"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.278634 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-846d64d6c4-66jvl" event={"ID":"10c8ed10-dab5-49e5-a030-4be99c720ae0","Type":"ContainerStarted","Data":"d7dee151b913c81c289549ef88de641f688ee6369ad4e8de643bb64edc4bba79"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.293606 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-64h92" 
event={"ID":"76fffa56-d701-41ca-8a74-4c72015701f4","Type":"ContainerDied","Data":"20a138c6edb89962e7c5b0a31f95c01ed773a0676b63908d4e1b314d61c8538d"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.293670 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a138c6edb89962e7c5b0a31f95c01ed773a0676b63908d4e1b314d61c8538d" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.293727 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-64h92" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.307429 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-846d64d6c4-66jvl" podStartSLOduration=31.749417664 podStartE2EDuration="32.307413371s" podCreationTimestamp="2026-01-26 11:12:54 +0000 UTC" firstStartedPulling="2026-01-26 11:13:24.879260358 +0000 UTC m=+1103.913301075" lastFinishedPulling="2026-01-26 11:13:25.437256066 +0000 UTC m=+1104.471296782" observedRunningTime="2026-01-26 11:13:26.30665811 +0000 UTC m=+1105.340698826" watchObservedRunningTime="2026-01-26 11:13:26.307413371 +0000 UTC m=+1105.341454087" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.317205 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltrbg" event={"ID":"3628aeb0-1e2e-4275-a914-31e18f47a989","Type":"ContainerStarted","Data":"8ace1614ebcd4e162c21785ba58440191b5b058d2f8d798ed1d440fddc61c40a"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.324803 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerStarted","Data":"b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.324860 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerStarted","Data":"ada190b479ee52f8303b817e9c1c2701293e633d99dd5836167d714d09c747ba"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.365059 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerStarted","Data":"7e3704a57885271c7faeef3d26990e8c17acae48718889e993d22d753a805304"} Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.397090 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ltrbg" podStartSLOduration=27.397069655 podStartE2EDuration="27.397069655s" podCreationTimestamp="2026-01-26 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:26.386829531 +0000 UTC m=+1105.420870247" watchObservedRunningTime="2026-01-26 11:13:26.397069655 +0000 UTC m=+1105.431110371" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.522384 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f67c775d4-7ls4r" podStartSLOduration=32.009901219 podStartE2EDuration="32.522252662s" podCreationTimestamp="2026-01-26 11:12:54 +0000 UTC" firstStartedPulling="2026-01-26 11:13:24.923405321 +0000 UTC m=+1103.957446037" lastFinishedPulling="2026-01-26 11:13:25.435756764 +0000 UTC m=+1104.469797480" observedRunningTime="2026-01-26 11:13:26.506974549 +0000 UTC m=+1105.541015265" watchObservedRunningTime="2026-01-26 
11:13:26.522252662 +0000 UTC m=+1105.556293378" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.729658 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"] Jan 26 11:13:26 crc kubenswrapper[4619]: E0126 11:13:26.729999 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fffa56-d701-41ca-8a74-4c72015701f4" containerName="neutron-db-sync" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.730011 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fffa56-d701-41ca-8a74-4c72015701f4" containerName="neutron-db-sync" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.730296 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fffa56-d701-41ca-8a74-4c72015701f4" containerName="neutron-db-sync" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.731164 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.764956 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"] Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819130 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819198 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819235 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819331 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkmf\" (UniqueName: \"kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819550 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.819573 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " 
pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.928645 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgkmf\" (UniqueName: \"kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.929492 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.931513 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.931745 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.931899 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.932065 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.933102 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.933732 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.938826 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 
11:13:26.939767 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.940589 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:26 crc kubenswrapper[4619]: I0126 11:13:26.975050 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgkmf\" (UniqueName: \"kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf\") pod \"dnsmasq-dns-55f844cf75-dqf8s\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.007081 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.008470 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.032363 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-66pxq" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.032660 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.032764 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.032860 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.035988 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.036068 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.036095 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.036158 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2dz\" (UniqueName: 
\"kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.036183 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.073019 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.081138 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.139706 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.139796 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.139834 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.139886 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2dz\" (UniqueName: \"kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.139910 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.146471 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.153557 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " 
pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.157348 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.159485 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.182756 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2dz\" (UniqueName: \"kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz\") pod \"neutron-66768f896-nrg7c\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.383868 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.429028 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerStarted","Data":"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa"} Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.429911 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-log" containerID="cri-o://1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" gracePeriod=30 Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.431393 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-httpd" containerID="cri-o://ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" gracePeriod=30 Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.478718 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.478700057 podStartE2EDuration="34.478700057s" podCreationTimestamp="2026-01-26 11:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:27.462852589 +0000 UTC m=+1106.496893305" watchObservedRunningTime="2026-01-26 11:13:27.478700057 +0000 UTC m=+1106.512740773" Jan 26 11:13:27 crc kubenswrapper[4619]: E0126 11:13:27.752596 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1557a03a_428e_4a8c_8ddb_634add71a69f.slice/crio-ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:13:27 crc kubenswrapper[4619]: I0126 11:13:27.817700 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"] Jan 26 11:13:28 crc 
kubenswrapper[4619]: I0126 11:13:28.041814 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.313083 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466271 4619 generic.go:334] "Generic (PLEG): container finished" podID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerID="ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" exitCode=143 Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466529 4619 generic.go:334] "Generic (PLEG): container finished" podID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerID="1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" exitCode=143 Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466591 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerDied","Data":"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466648 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerDied","Data":"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466658 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1557a03a-428e-4a8c-8ddb-634add71a69f","Type":"ContainerDied","Data":"6d40629ba966c74b319e03e6f742e05c43700ea4f6543eb7849dafa00294d3c4"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466673 4619 scope.go:117] "RemoveContainer" containerID="ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.466811 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.474267 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerStarted","Data":"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482306 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482361 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvhv\" (UniqueName: \"kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482386 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482509 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482528 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482573 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482674 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.482737 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts\") pod \"1557a03a-428e-4a8c-8ddb-634add71a69f\" (UID: \"1557a03a-428e-4a8c-8ddb-634add71a69f\") " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.483657 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs" (OuterVolumeSpecName: "logs") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.488230 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerStarted","Data":"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.488281 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerStarted","Data":"a02dd9504fb6dab2f94f59a6ba9375fac411f2b58199da19866144c97ee4d718"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.489187 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.494889 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv" (OuterVolumeSpecName: "kube-api-access-xzvhv") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "kube-api-access-xzvhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.504796 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.505809 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts" (OuterVolumeSpecName: "scripts") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.511894 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerStarted","Data":"c17b72b668659ba38daf38c72b4480bdb58505522772f2ef14d4461b7fdc3966"} Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.515764 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.517541 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.517527285 podStartE2EDuration="18.517527285s" podCreationTimestamp="2026-01-26 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:28.516419554 +0000 UTC m=+1107.550460270" watchObservedRunningTime="2026-01-26 11:13:28.517527285 +0000 UTC m=+1107.551568001" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585110 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585320 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585384 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585437 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvhv\" (UniqueName: \"kubernetes.io/projected/1557a03a-428e-4a8c-8ddb-634add71a69f-kube-api-access-xzvhv\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585513 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.585591 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1557a03a-428e-4a8c-8ddb-634add71a69f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.615347 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data" (OuterVolumeSpecName: "config-data") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.630098 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.642876 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1557a03a-428e-4a8c-8ddb-634add71a69f" (UID: "1557a03a-428e-4a8c-8ddb-634add71a69f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.687296 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.687326 4619 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.687337 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1557a03a-428e-4a8c-8ddb-634add71a69f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.690510 4619 scope.go:117] "RemoveContainer" containerID="1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.746182 4619 scope.go:117] "RemoveContainer" containerID="ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" Jan 26 11:13:28 crc kubenswrapper[4619]: E0126 11:13:28.746663 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa\": container with ID starting with ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa not found: ID does not exist" containerID="ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.746689 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa"} err="failed to get container status \"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa\": rpc error: code = NotFound desc = could not find container \"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa\": container with ID starting with ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa not found: ID does not exist" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.746709 4619 scope.go:117] "RemoveContainer" containerID="1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" Jan 26 11:13:28 crc kubenswrapper[4619]: E0126 11:13:28.748269 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a\": container with ID starting with 1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a not found: ID does not exist" containerID="1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.748298 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a"} err="failed to get container status \"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a\": rpc error: code = NotFound desc = could not find container \"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a\": container with ID starting with 1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a not found: ID does not exist" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.748316 4619 
scope.go:117] "RemoveContainer" containerID="ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.748676 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa"} err="failed to get container status \"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa\": rpc error: code = NotFound desc = could not find container \"ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa\": container with ID starting with ab5adee278e4ae5d1b5ff10c6a64edf4a1f20fa19eaa89096b02e68831b8f1fa not found: ID does not exist" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.748716 4619 scope.go:117] "RemoveContainer" containerID="1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.749015 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a"} err="failed to get container status \"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a\": rpc error: code = NotFound desc = could not find container \"1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a\": container with ID starting with 1b0f40112063bbf62162a3ce7229cb31e0cc42c4e244dc0b3c762d4ad1b5cb9a not found: ID does not exist" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.807496 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.820555 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.917017 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:28 crc kubenswrapper[4619]: E0126 11:13:28.918050 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-httpd" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.918071 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-httpd" Jan 26 11:13:28 crc kubenswrapper[4619]: E0126 11:13:28.918100 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-log" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.918107 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-log" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.918382 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-log" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.918417 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" containerName="glance-httpd" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.938463 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.953329 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.953536 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:13:28 crc kubenswrapper[4619]: I0126 11:13:28.965072 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017450 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017516 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017563 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn4jj\" (UniqueName: \"kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017636 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017661 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017679 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017712 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.017735 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119111 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119161 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119321 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4jj\" (UniqueName: \"kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119371 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119391 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119410 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119445 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.119465 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.120390 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.121077 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.128649 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.131056 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.131839 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.140319 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.174219 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.181676 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4jj\" (UniqueName: \"kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.195488 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.279526 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1557a03a-428e-4a8c-8ddb-634add71a69f" path="/var/lib/kubelet/pods/1557a03a-428e-4a8c-8ddb-634add71a69f/volumes" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 
11:13:29.281187 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.536362 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerStarted","Data":"880ffa035f4f1105fb675a7b032ff779645c85f1b2469b696eae34ade35b1fb6"} Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.536668 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerStarted","Data":"8aa91cf4eb39ce34143f69a832c7b185190ef5c8466a9ab8a35cbc86e6566305"} Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.536806 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.543768 4619 generic.go:334] "Generic (PLEG): container finished" podID="9d5b79a0-cb51-483b-96b0-8f85b385692c" containerID="9dc902fa13ed66ab7c2dcd87b7b6ea31f0c59da47277b4b5f9c421710efa76de" exitCode=0 Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.543872 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7nh4" event={"ID":"9d5b79a0-cb51-483b-96b0-8f85b385692c","Type":"ContainerDied","Data":"9dc902fa13ed66ab7c2dcd87b7b6ea31f0c59da47277b4b5f9c421710efa76de"} Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.554366 4619 generic.go:334] "Generic (PLEG): container finished" podID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerID="7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf" exitCode=0 Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.554454 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerDied","Data":"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"} Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.663402 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66768f896-nrg7c" podStartSLOduration=3.663387808 podStartE2EDuration="3.663387808s" podCreationTimestamp="2026-01-26 11:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:29.589138432 +0000 UTC m=+1108.623179148" watchObservedRunningTime="2026-01-26 11:13:29.663387808 +0000 UTC m=+1108.697428524" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.932388 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"] Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.935699 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.946346 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.946538 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 11:13:29 crc kubenswrapper[4619]: I0126 11:13:29.963905 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"] Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.052948 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053049 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npm9v\" (UniqueName: \"kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053076 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053128 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053160 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053179 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.053205 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.154846 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.155454 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.155749 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npm9v\" (UniqueName: \"kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.156305 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.156728 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.156803 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.156845 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.163585 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.164059 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.168389 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle\") pod \"neutron-6f447695c7-n4pxf\" (UID: 
\"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.169132 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.169910 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.170690 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.183361 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npm9v\" (UniqueName: \"kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v\") pod \"neutron-6f447695c7-n4pxf\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") " pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.288037 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.298233 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:13:30 crc kubenswrapper[4619]: W0126 11:13:30.308836 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1636ba60_de14_4281_aa49_417f7808ccd9.slice/crio-262aa5a31759bf007a83ec9ffdeb36caea0d08c02970e4631d7cab7f9018c434 WatchSource:0}: Error finding container 262aa5a31759bf007a83ec9ffdeb36caea0d08c02970e4631d7cab7f9018c434: Status 404 returned error can't find the container with id 262aa5a31759bf007a83ec9ffdeb36caea0d08c02970e4631d7cab7f9018c434 Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.554045 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.554507 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.586228 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerStarted","Data":"262aa5a31759bf007a83ec9ffdeb36caea0d08c02970e4631d7cab7f9018c434"} Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.595454 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerStarted","Data":"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"} Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.595853 4619 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.595870 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.650210 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" podStartSLOduration=4.650190734 podStartE2EDuration="4.650190734s" podCreationTimestamp="2026-01-26 11:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:30.625966603 +0000 UTC m=+1109.660007319" watchObservedRunningTime="2026-01-26 11:13:30.650190734 +0000 UTC m=+1109.684231450" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.740205 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:13:30 crc kubenswrapper[4619]: I0126 11:13:30.933008 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"] Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.231714 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7nh4" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.288735 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data\") pod \"9d5b79a0-cb51-483b-96b0-8f85b385692c\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.288803 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle\") pod \"9d5b79a0-cb51-483b-96b0-8f85b385692c\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.288873 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs\") pod \"9d5b79a0-cb51-483b-96b0-8f85b385692c\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.288900 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts\") pod \"9d5b79a0-cb51-483b-96b0-8f85b385692c\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.288997 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwktb\" (UniqueName: \"kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb\") pod \"9d5b79a0-cb51-483b-96b0-8f85b385692c\" (UID: \"9d5b79a0-cb51-483b-96b0-8f85b385692c\") " Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.289949 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs" (OuterVolumeSpecName: "logs") pod "9d5b79a0-cb51-483b-96b0-8f85b385692c" (UID: "9d5b79a0-cb51-483b-96b0-8f85b385692c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.294475 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb" (OuterVolumeSpecName: "kube-api-access-lwktb") pod "9d5b79a0-cb51-483b-96b0-8f85b385692c" (UID: "9d5b79a0-cb51-483b-96b0-8f85b385692c"). InnerVolumeSpecName "kube-api-access-lwktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.298857 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts" (OuterVolumeSpecName: "scripts") pod "9d5b79a0-cb51-483b-96b0-8f85b385692c" (UID: "9d5b79a0-cb51-483b-96b0-8f85b385692c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.358016 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d5b79a0-cb51-483b-96b0-8f85b385692c" (UID: "9d5b79a0-cb51-483b-96b0-8f85b385692c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.392593 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.392635 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d5b79a0-cb51-483b-96b0-8f85b385692c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.392644 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.392652 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwktb\" (UniqueName: \"kubernetes.io/projected/9d5b79a0-cb51-483b-96b0-8f85b385692c-kube-api-access-lwktb\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.420854 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data" (OuterVolumeSpecName: "config-data") pod "9d5b79a0-cb51-483b-96b0-8f85b385692c" (UID: "9d5b79a0-cb51-483b-96b0-8f85b385692c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.504297 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b79a0-cb51-483b-96b0-8f85b385692c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.614002 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7nh4" event={"ID":"9d5b79a0-cb51-483b-96b0-8f85b385692c","Type":"ContainerDied","Data":"57e23cace7f90d9465bbaf9c16acb7940da71a536303017e5f04dc7bd9888cff"} Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.614044 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e23cace7f90d9465bbaf9c16acb7940da71a536303017e5f04dc7bd9888cff" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.614136 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7nh4" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.629853 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerStarted","Data":"838fbd2d77860d0edc5a5c4a75081b4b8e8dc68d7a8c941401e4462ad4461361"} Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.638735 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerStarted","Data":"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454"} Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.645320 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.645430 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.805357 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-846c8954d4-fg4cj"] Jan 26 11:13:31 crc kubenswrapper[4619]: E0126 11:13:31.805718 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5b79a0-cb51-483b-96b0-8f85b385692c" containerName="placement-db-sync" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.805731 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5b79a0-cb51-483b-96b0-8f85b385692c" containerName="placement-db-sync" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.805875 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5b79a0-cb51-483b-96b0-8f85b385692c" containerName="placement-db-sync" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.807100 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.813266 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dcfmk" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.813494 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.813682 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.813863 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.813962 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.832955 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846c8954d4-fg4cj"] Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.945489 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-config-data\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.945876 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-combined-ca-bundle\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.945924 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-scripts\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.946397 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-logs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.946441 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-internal-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.946496 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzzp\" (UniqueName: \"kubernetes.io/projected/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-kube-api-access-lgzzp\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:31 crc kubenswrapper[4619]: I0126 11:13:31.946602 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-public-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048586 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-config-data\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048732 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-combined-ca-bundle\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048787 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-scripts\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048837 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-logs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048853 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-internal-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048873 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzzp\" (UniqueName: \"kubernetes.io/projected/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-kube-api-access-lgzzp\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.048904 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-public-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.049723 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-logs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.053920 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-public-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.054644 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-internal-tls-certs\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.054905 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-scripts\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.055302 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-combined-ca-bundle\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.062262 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-config-data\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.081103 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzzp\" (UniqueName: \"kubernetes.io/projected/e9d0bb40-3939-4f19-b3e8-f31e6bb0b381-kube-api-access-lgzzp\") pod \"placement-846c8954d4-fg4cj\" (UID: \"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381\") " pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.162604 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-846c8954d4-fg4cj" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.719598 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerStarted","Data":"a4133e694ff84cac8215b31d214b2b572d80ad955195facd4b64de2c6d5185c6"} Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.719926 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerStarted","Data":"499daccc20383486ab3bbd336b1ebd266ea8d7366c1d923903ebeb85b451bdce"} Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.720736 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f447695c7-n4pxf" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.776368 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f447695c7-n4pxf" podStartSLOduration=3.776340793 podStartE2EDuration="3.776340793s" podCreationTimestamp="2026-01-26 11:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:32.765687437 +0000 UTC m=+1111.799728153" watchObservedRunningTime="2026-01-26 11:13:32.776340793 +0000 UTC m=+1111.810381509" Jan 26 11:13:32 crc kubenswrapper[4619]: I0126 11:13:32.830321 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846c8954d4-fg4cj"] Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.741005 4619 generic.go:334] "Generic (PLEG): container finished" podID="3628aeb0-1e2e-4275-a914-31e18f47a989" containerID="8ace1614ebcd4e162c21785ba58440191b5b058d2f8d798ed1d440fddc61c40a" exitCode=0 Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.741706 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltrbg" event={"ID":"3628aeb0-1e2e-4275-a914-31e18f47a989","Type":"ContainerDied","Data":"8ace1614ebcd4e162c21785ba58440191b5b058d2f8d798ed1d440fddc61c40a"} Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.745557 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerStarted","Data":"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5"} Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.757492 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846c8954d4-fg4cj" event={"ID":"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381","Type":"ContainerStarted","Data":"d2340aed7cd9e61cb227c343e574faf6d6c9d0d899c772ad0d9fb80c70c307d6"} Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.757538 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846c8954d4-fg4cj" event={"ID":"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381","Type":"ContainerStarted","Data":"34d3c718d039c8d8551e308c5be98c1948f67d637cddfe580e69acb7110c2ab3"} Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.757550 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846c8954d4-fg4cj" event={"ID":"e9d0bb40-3939-4f19-b3e8-f31e6bb0b381","Type":"ContainerStarted","Data":"d57854f6383cce28949d1062043812c5b01df85e736c17202bddf61b2a7bc5db"} Jan 26 11:13:33 crc kubenswrapper[4619]: I0126 11:13:33.757600 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 26 11:13:34 crc kubenswrapper[4619]: I0126 11:13:34.279831 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.279812322 podStartE2EDuration="6.279812322s" podCreationTimestamp="2026-01-26 11:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:33.7978159 +0000 UTC m=+1112.831856616" watchObservedRunningTime="2026-01-26 11:13:34.279812322 +0000 UTC m=+1113.313853038"
Jan 26 11:13:34 crc kubenswrapper[4619]: I0126 11:13:34.765784 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846c8954d4-fg4cj"
Jan 26 11:13:34 crc kubenswrapper[4619]: I0126 11:13:34.765837 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846c8954d4-fg4cj"
Jan 26 11:13:34 crc kubenswrapper[4619]: I0126 11:13:34.806291 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-846c8954d4-fg4cj" podStartSLOduration=3.806270256 podStartE2EDuration="3.806270256s" podCreationTimestamp="2026-01-26 11:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:34.797559925 +0000 UTC m=+1113.831600651" watchObservedRunningTime="2026-01-26 11:13:34.806270256 +0000 UTC m=+1113.840310972"
Jan 26 11:13:35 crc kubenswrapper[4619]: I0126 11:13:35.102215 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f67c775d4-7ls4r"
Jan 26 11:13:35 crc kubenswrapper[4619]: I0126 11:13:35.102480 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f67c775d4-7ls4r"
Jan 26 11:13:35 crc kubenswrapper[4619]: I0126 11:13:35.306147 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-846d64d6c4-66jvl"
Jan 26 11:13:35 crc kubenswrapper[4619]: I0126 11:13:35.306213 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-846d64d6c4-66jvl"
Jan 26 11:13:36 crc kubenswrapper[4619]: I0126 11:13:36.978141 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltrbg"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.066692 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.066962 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.066988 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.067019 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.067037 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb9bv\" (UniqueName: \"kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.067069 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys\") pod \"3628aeb0-1e2e-4275-a914-31e18f47a989\" (UID: \"3628aeb0-1e2e-4275-a914-31e18f47a989\") "
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.082838 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.084853 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.097928 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.098075 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts" (OuterVolumeSpecName: "scripts") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.102087 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv" (OuterVolumeSpecName: "kube-api-access-wb9bv") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "kube-api-access-wb9bv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.110701 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.170946 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.170986 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.170998 4619 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.171012 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb9bv\" (UniqueName: \"kubernetes.io/projected/3628aeb0-1e2e-4275-a914-31e18f47a989-kube-api-access-wb9bv\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.171026 4619 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.190381 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"]
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.190597 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="dnsmasq-dns" containerID="cri-o://81ff015cadea4ceebbc379201a125a8398879a2727da4f4bffcf76c804875e13" gracePeriod=10
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.193246 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data" (OuterVolumeSpecName: "config-data") pod "3628aeb0-1e2e-4275-a914-31e18f47a989" (UID: "3628aeb0-1e2e-4275-a914-31e18f47a989"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.274478 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3628aeb0-1e2e-4275-a914-31e18f47a989-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.369122 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.369235 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.371878 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.847064 4619 generic.go:334] "Generic (PLEG): container finished" podID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerID="81ff015cadea4ceebbc379201a125a8398879a2727da4f4bffcf76c804875e13" exitCode=0
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.847109 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" event={"ID":"74480c10-4eeb-4a66-99a0-82c49acf75d4","Type":"ContainerDied","Data":"81ff015cadea4ceebbc379201a125a8398879a2727da4f4bffcf76c804875e13"}
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.849050 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.851977 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltrbg" event={"ID":"3628aeb0-1e2e-4275-a914-31e18f47a989","Type":"ContainerDied","Data":"66d8942d6a7d67cb2d492d42166ca9c5586f79270adc786707236dbd76523d58"}
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.852015 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d8942d6a7d67cb2d492d42166ca9c5586f79270adc786707236dbd76523d58"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.852084 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltrbg"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.903003 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vmzjm" event={"ID":"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e","Type":"ContainerStarted","Data":"fb61a5a65be44d00f88b4c6db055e747ee591a7ea7e16082937f513414454156"}
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.925628 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vmzjm" podStartSLOduration=4.347964784 podStartE2EDuration="53.925601409s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="2026-01-26 11:12:47.472225803 +0000 UTC m=+1066.506266519" lastFinishedPulling="2026-01-26 11:13:37.049862438 +0000 UTC m=+1116.083903144" observedRunningTime="2026-01-26 11:13:37.918660206 +0000 UTC m=+1116.952700922" watchObservedRunningTime="2026-01-26 11:13:37.925601409 +0000 UTC m=+1116.959642125"
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.957988 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerStarted","Data":"79cb191465b8a7bf9e0cfebec9c66123e90081bcbde3d3834fed2928b741e6a6"}
Jan 26 11:13:37 crc kubenswrapper[4619]: I0126 11:13:37.999117 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:37.999238 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:37.999263 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:37.999328 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzxkg\" (UniqueName: \"kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:37.999380 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:37.999401 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb\") pod \"74480c10-4eeb-4a66-99a0-82c49acf75d4\" (UID: \"74480c10-4eeb-4a66-99a0-82c49acf75d4\") "
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.079559 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg" (OuterVolumeSpecName: "kube-api-access-hzxkg") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "kube-api-access-hzxkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.209637 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-75ddf854f7-wtpq9"]
Jan 26 11:13:38 crc kubenswrapper[4619]: E0126 11:13:38.209997 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="dnsmasq-dns"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.210014 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="dnsmasq-dns"
Jan 26 11:13:38 crc kubenswrapper[4619]: E0126 11:13:38.210028 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3628aeb0-1e2e-4275-a914-31e18f47a989" containerName="keystone-bootstrap"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.210034 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3628aeb0-1e2e-4275-a914-31e18f47a989" containerName="keystone-bootstrap"
Jan 26 11:13:38 crc kubenswrapper[4619]: E0126 11:13:38.210055 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="init"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.210061 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="init"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.210236 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" containerName="dnsmasq-dns"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.214816 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3628aeb0-1e2e-4275-a914-31e18f47a989" containerName="keystone-bootstrap"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.215539 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.219656 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzxkg\" (UniqueName: \"kubernetes.io/projected/74480c10-4eeb-4a66-99a0-82c49acf75d4-kube-api-access-hzxkg\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.220144 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.220369 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.222096 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.229380 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.229721 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hx4x"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.229893 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.235486 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75ddf854f7-wtpq9"]
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.313344 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.316094 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320787 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-scripts\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320824 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-config-data\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320868 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-credential-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320921 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-public-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320946 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-combined-ca-bundle\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.320997 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-fernet-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.321061 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xcf\" (UniqueName: \"kubernetes.io/projected/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-kube-api-access-76xcf\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.321154 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-internal-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.321228 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.321245 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.334412 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config" (OuterVolumeSpecName: "config") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.335279 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.336175 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74480c10-4eeb-4a66-99a0-82c49acf75d4" (UID: "74480c10-4eeb-4a66-99a0-82c49acf75d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422223 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-credential-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422520 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-public-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422547 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-combined-ca-bundle\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422570 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-fernet-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422605 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xcf\" (UniqueName: \"kubernetes.io/projected/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-kube-api-access-76xcf\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422672 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-internal-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422714 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-config-data\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422731 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-scripts\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422779 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422792 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.422802 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74480c10-4eeb-4a66-99a0-82c49acf75d4-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.434529 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-internal-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.434988 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-public-tls-certs\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.435353 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-scripts\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.435961 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-fernet-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.437041 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-credential-keys\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.437533 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-combined-ca-bundle\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.438145 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-config-data\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.459178 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xcf\" (UniqueName: \"kubernetes.io/projected/29fb1b8f-bf8f-456d-8e56-8fded6d074a1-kube-api-access-76xcf\") pod \"keystone-75ddf854f7-wtpq9\" (UID: \"29fb1b8f-bf8f-456d-8e56-8fded6d074a1\") " pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.530466 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.972600 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.972761 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s54b4" event={"ID":"74480c10-4eeb-4a66-99a0-82c49acf75d4","Type":"ContainerDied","Data":"6fc1fc9e4b693f59d082d85e3c1299d631fda01ca435aa563ed769a704f9958a"}
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.973524 4619 scope.go:117] "RemoveContainer" containerID="81ff015cadea4ceebbc379201a125a8398879a2727da4f4bffcf76c804875e13"
Jan 26 11:13:38 crc kubenswrapper[4619]: I0126 11:13:38.988470 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7zn9h" event={"ID":"42f56f30-76de-408b-bbe1-8ef2b764f26b","Type":"ContainerStarted","Data":"33cf5b414a5f2c4f75bea2cf04e5c3cc2aff136af62769a26d6566b5900430d1"}
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.024185 4619 scope.go:117] "RemoveContainer" containerID="49dbb3362f95c5fc5ee1202bb056b6c1a772e2296b21592ad081154c3727bc32"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.046161 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"]
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.092517 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s54b4"]
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.121098 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75ddf854f7-wtpq9"]
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.137566 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7zn9h" podStartSLOduration=5.060931449 podStartE2EDuration="55.137542001s" podCreationTimestamp="2026-01-26 11:12:44 +0000 UTC" firstStartedPulling="2026-01-26 11:12:46.976416693 +0000 UTC m=+1066.010457409" lastFinishedPulling="2026-01-26 11:13:37.053027245 +0000 UTC m=+1116.087067961" observedRunningTime="2026-01-26 11:13:39.058326647 +0000 UTC m=+1118.092367363" watchObservedRunningTime="2026-01-26 11:13:39.137542001 +0000 UTC m=+1118.171582717"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.279169 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74480c10-4eeb-4a66-99a0-82c49acf75d4" path="/var/lib/kubelet/pods/74480c10-4eeb-4a66-99a0-82c49acf75d4/volumes"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.281526 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.282355 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.312529 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:39 crc kubenswrapper[4619]: I0126 11:13:39.328944 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:40 crc kubenswrapper[4619]: I0126 11:13:40.003232 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ddf854f7-wtpq9" event={"ID":"29fb1b8f-bf8f-456d-8e56-8fded6d074a1","Type":"ContainerStarted","Data":"98c20c21568013eb9e24996d589c1bc047f70ff94d5b0798594e47c7d94d7d96"}
Jan 26 11:13:40 crc kubenswrapper[4619]: I0126 11:13:40.003526 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:40 crc kubenswrapper[4619]: I0126 11:13:40.003640 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:41 crc kubenswrapper[4619]: I0126 11:13:41.012751 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ddf854f7-wtpq9" event={"ID":"29fb1b8f-bf8f-456d-8e56-8fded6d074a1","Type":"ContainerStarted","Data":"307fa4d78c3cb39612577d55a177d98f44f74774f94e5d2f8157d0d4328cb355"}
Jan 26 11:13:41 crc kubenswrapper[4619]: I0126 11:13:41.037042 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75ddf854f7-wtpq9" podStartSLOduration=3.037020621 podStartE2EDuration="3.037020621s" podCreationTimestamp="2026-01-26 11:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:41.03409091 +0000 UTC m=+1120.068131626" watchObservedRunningTime="2026-01-26 11:13:41.037020621 +0000 UTC m=+1120.071061337"
Jan 26 11:13:42 crc kubenswrapper[4619]: I0126 11:13:42.021655 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75ddf854f7-wtpq9"
Jan 26 11:13:42 crc kubenswrapper[4619]: I0126 11:13:42.021719 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:13:42 crc kubenswrapper[4619]: I0126 11:13:42.021955 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:13:43 crc kubenswrapper[4619]: I0126 11:13:43.038549 4619 generic.go:334] "Generic (PLEG): container finished" podID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" containerID="fb61a5a65be44d00f88b4c6db055e747ee591a7ea7e16082937f513414454156" exitCode=0
Jan 26 11:13:43 crc kubenswrapper[4619]: I0126 11:13:43.039353 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vmzjm" event={"ID":"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e","Type":"ContainerDied","Data":"fb61a5a65be44d00f88b4c6db055e747ee591a7ea7e16082937f513414454156"}
Jan 26 11:13:44 crc kubenswrapper[4619]: I0126 11:13:44.235030 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:13:44 crc kubenswrapper[4619]: I0126 11:13:44.236240 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:13:44 crc kubenswrapper[4619]: I0126 11:13:44.346934 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:44 crc kubenswrapper[4619]: I0126 11:13:44.347266 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:13:44 crc kubenswrapper[4619]: I0126 11:13:44.352382 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 11:13:45 crc kubenswrapper[4619]: I0126 11:13:45.115127 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Jan 26 11:13:45 crc kubenswrapper[4619]: I0126 11:13:45.307723 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-846d64d6c4-66jvl" podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.104314 4619 generic.go:334] "Generic (PLEG): container finished" podID="42f56f30-76de-408b-bbe1-8ef2b764f26b" containerID="33cf5b414a5f2c4f75bea2cf04e5c3cc2aff136af62769a26d6566b5900430d1" exitCode=0
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.104472 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7zn9h" event={"ID":"42f56f30-76de-408b-bbe1-8ef2b764f26b","Type":"ContainerDied","Data":"33cf5b414a5f2c4f75bea2cf04e5c3cc2aff136af62769a26d6566b5900430d1"}
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.493674 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vmzjm"
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.585225 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle\") pod \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") "
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.585330 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data\") pod \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") "
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.585391 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc\") pod \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\" (UID: \"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e\") "
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.606804 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc" (OuterVolumeSpecName: "kube-api-access-bm4zc") pod "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" (UID: "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e"). InnerVolumeSpecName "kube-api-access-bm4zc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.606908 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" (UID: "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.609988 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" (UID: "4fe185f3-c64d-47a7-9c93-f40ef8d24d9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.687265 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.687519 4619 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:48 crc kubenswrapper[4619]: I0126 11:13:48.687529 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e-kube-api-access-bm4zc\") on node \"crc\" DevicePath \"\""
Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.114303 4619 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-sync-vmzjm" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.123164 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vmzjm" event={"ID":"4fe185f3-c64d-47a7-9c93-f40ef8d24d9e","Type":"ContainerDied","Data":"12ceab4058ee6945f0d671053098031e5003db2ee2f3493931679fa96f5e97e2"} Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.123200 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ceab4058ee6945f0d671053098031e5003db2ee2f3493931679fa96f5e97e2" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.782251 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b76d57d79-c2tm5"] Jan 26 11:13:49 crc kubenswrapper[4619]: E0126 11:13:49.783040 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" containerName="barbican-db-sync" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.783064 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" containerName="barbican-db-sync" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.783292 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" containerName="barbican-db-sync" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.784485 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.791359 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.791506 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.791516 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jzqlk" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.835576 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b76d57d79-c2tm5"] Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.850673 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5cd59c79cd-lqtz6"] Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.852359 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.856391 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5cd59c79cd-lqtz6"] Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.869096 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.913780 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.913870 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglnk\" (UniqueName: \"kubernetes.io/projected/827c156d-633b-414a-93ef-07d73ba79785-kube-api-access-pglnk\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.913932 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-combined-ca-bundle\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.913955 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data-custom\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.913981 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.914013 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-combined-ca-bundle\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.914042 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4a2072-c71c-42f6-940e-35435fc350c7-logs\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.914107 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/827c156d-633b-414a-93ef-07d73ba79785-logs\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.914140 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data-custom\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.914176 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbh4p\" (UniqueName: \"kubernetes.io/projected/cd4a2072-c71c-42f6-940e-35435fc350c7-kube-api-access-hbh4p\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.977543 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:49 crc kubenswrapper[4619]: I0126 11:13:49.984947 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.017747 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.017983 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/827c156d-633b-414a-93ef-07d73ba79785-logs\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018028 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data-custom\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018057 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbh4p\" (UniqueName: \"kubernetes.io/projected/cd4a2072-c71c-42f6-940e-35435fc350c7-kube-api-access-hbh4p\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018103 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018130 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglnk\" (UniqueName: 
\"kubernetes.io/projected/827c156d-633b-414a-93ef-07d73ba79785-kube-api-access-pglnk\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018167 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-combined-ca-bundle\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018185 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data-custom\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018202 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018221 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-combined-ca-bundle\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018244 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4a2072-c71c-42f6-940e-35435fc350c7-logs\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018699 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd4a2072-c71c-42f6-940e-35435fc350c7-logs\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.018932 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/827c156d-633b-414a-93ef-07d73ba79785-logs\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.030318 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-combined-ca-bundle\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.031779 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.033455 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.035762 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-combined-ca-bundle\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.050606 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd4a2072-c71c-42f6-940e-35435fc350c7-config-data-custom\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.055267 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/827c156d-633b-414a-93ef-07d73ba79785-config-data-custom\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.070750 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbh4p\" (UniqueName: \"kubernetes.io/projected/cd4a2072-c71c-42f6-940e-35435fc350c7-kube-api-access-hbh4p\") pod \"barbican-worker-5b76d57d79-c2tm5\" (UID: \"cd4a2072-c71c-42f6-940e-35435fc350c7\") " pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.075956 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglnk\" (UniqueName: \"kubernetes.io/projected/827c156d-633b-414a-93ef-07d73ba79785-kube-api-access-pglnk\") pod \"barbican-keystone-listener-5cd59c79cd-lqtz6\" (UID: \"827c156d-633b-414a-93ef-07d73ba79785\") " pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.112535 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b76d57d79-c2tm5" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121023 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwjkq\" (UniqueName: \"kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121077 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121127 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121156 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121183 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.121205 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.188286 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222701 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222775 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222802 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222823 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222841 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.222914 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwjkq\" (UniqueName: \"kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.224000 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.224494 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.224994 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.225474 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.226055 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.263167 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwjkq\" (UniqueName: \"kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq\") pod \"dnsmasq-dns-85ff748b95-lxhdc\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " pod="openstack/dnsmasq-dns-85ff748b95-lxhdc"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.280437 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"]
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.282179 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.293001 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.310312 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"]
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.430211 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.430531 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fk4\" (UniqueName: \"kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.430676 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.430770 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.430823 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.463414 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.532940 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.532989 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.533061 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.533123 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fk4\" (UniqueName: \"kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.533180 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.533632 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.538181 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.539764 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " 
pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.539779 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.553502 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fk4\" (UniqueName: \"kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4\") pod \"barbican-api-b7745f6db-mwnfn\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") " pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.624466 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.682753 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837369 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837460 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837551 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837640 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837723 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.837765 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqt8q\" (UniqueName: \"kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q\") pod \"42f56f30-76de-408b-bbe1-8ef2b764f26b\" (UID: \"42f56f30-76de-408b-bbe1-8ef2b764f26b\") " Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.841686 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.856809 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q" (OuterVolumeSpecName: "kube-api-access-sqt8q") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "kube-api-access-sqt8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.858936 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts" (OuterVolumeSpecName: "scripts") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.859984 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.939711 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.940006 4619 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.940016 4619 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f56f30-76de-408b-bbe1-8ef2b764f26b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.940024 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqt8q\" (UniqueName: \"kubernetes.io/projected/42f56f30-76de-408b-bbe1-8ef2b764f26b-kube-api-access-sqt8q\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.944774 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:50 crc kubenswrapper[4619]: I0126 11:13:50.992350 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data" (OuterVolumeSpecName: "config-data") pod "42f56f30-76de-408b-bbe1-8ef2b764f26b" (UID: "42f56f30-76de-408b-bbe1-8ef2b764f26b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.041230 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.041256 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f56f30-76de-408b-bbe1-8ef2b764f26b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.213418 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7zn9h" event={"ID":"42f56f30-76de-408b-bbe1-8ef2b764f26b","Type":"ContainerDied","Data":"638a4d1250dddf57a8e42bb54ccd02cc3175cfc17ad087d2bab6e0bbb65f67d2"} Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.213805 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638a4d1250dddf57a8e42bb54ccd02cc3175cfc17ad087d2bab6e0bbb65f67d2" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.213666 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7zn9h" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.357780 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:51 crc kubenswrapper[4619]: E0126 11:13:51.459903 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.668244 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b76d57d79-c2tm5"] Jan 26 11:13:51 crc kubenswrapper[4619]: W0126 11:13:51.712902 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4a2072_c71c_42f6_940e_35435fc350c7.slice/crio-0d2fa99b393cc60b09a6c1490a896b8f430a8bf8220c46be5d4345bf78099bca WatchSource:0}: Error finding container 0d2fa99b393cc60b09a6c1490a896b8f430a8bf8220c46be5d4345bf78099bca: Status 404 returned error can't find the container with id 0d2fa99b393cc60b09a6c1490a896b8f430a8bf8220c46be5d4345bf78099bca Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.757923 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5cd59c79cd-lqtz6"] Jan 26 11:13:51 crc kubenswrapper[4619]: W0126 11:13:51.797299 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f6c5e4d_17c2_45e9_add1_0b35026ba69d.slice/crio-8d0d0597138fd708bab64a049e6efdf65481b105eafe70c80d7017f6e8556b48 WatchSource:0}: Error finding container 8d0d0597138fd708bab64a049e6efdf65481b105eafe70c80d7017f6e8556b48: Status 404 returned error can't find the container with id 8d0d0597138fd708bab64a049e6efdf65481b105eafe70c80d7017f6e8556b48 Jan 26 11:13:51 crc kubenswrapper[4619]: I0126 11:13:51.798764 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.059207 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.080746 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:13:52 crc kubenswrapper[4619]: E0126 11:13:52.081128 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" containerName="cinder-db-sync" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.081145 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" containerName="cinder-db-sync" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.081321 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" containerName="cinder-db-sync" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.082327 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.093937 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.094107 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.095325 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.095588 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bthxz" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.121883 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173590 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqmc\" (UniqueName: \"kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173657 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173727 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173746 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173770 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.173795 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.228250 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b76d57d79-c2tm5" event={"ID":"cd4a2072-c71c-42f6-940e-35435fc350c7","Type":"ContainerStarted","Data":"0d2fa99b393cc60b09a6c1490a896b8f430a8bf8220c46be5d4345bf78099bca"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.244827 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerStarted","Data":"568063328cf375c869cbcd17174ade10f48edb94016e35549e0fe88d800a29d9"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.245134 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="ceilometer-notification-agent" containerID="cri-o://7e3704a57885271c7faeef3d26990e8c17acae48718889e993d22d753a805304" gracePeriod=30 Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.245410 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.245739 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="proxy-httpd" containerID="cri-o://568063328cf375c869cbcd17174ade10f48edb94016e35549e0fe88d800a29d9" gracePeriod=30 Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.245846 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="sg-core" containerID="cri-o://79cb191465b8a7bf9e0cfebec9c66123e90081bcbde3d3834fed2928b741e6a6" gracePeriod=30 Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.248509 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" event={"ID":"827c156d-633b-414a-93ef-07d73ba79785","Type":"ContainerStarted","Data":"a82a38710e763b0fe61f8d55b3afd51c61dd35308bf56a5a3296ab9c5974613f"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.250012 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerStarted","Data":"8d0d0597138fd708bab64a049e6efdf65481b105eafe70c80d7017f6e8556b48"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.253938 4619 generic.go:334] "Generic (PLEG): container finished" podID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" containerID="e3a4592fe916b0bfdd1ac1c8348e991b49588d2a58b0c1f74d5534eb42dc40a0" exitCode=0 Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.253992 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" 
event={"ID":"c9fab432-c15e-42ed-8e2f-0593b72be3b8","Type":"ContainerDied","Data":"e3a4592fe916b0bfdd1ac1c8348e991b49588d2a58b0c1f74d5534eb42dc40a0"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.254026 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" event={"ID":"c9fab432-c15e-42ed-8e2f-0593b72be3b8","Type":"ContainerStarted","Data":"c40cfb03b2db67b100692f1f9f1bd02c4860355e3eae296f97007b05dc070d89"} Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275722 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqmc\" (UniqueName: \"kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275791 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275865 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275885 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275913 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.275936 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.277579 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.279113 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.279196 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.283441 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.283527 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.287006 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.325723 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.327428 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqmc\" (UniqueName: \"kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.346924 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") " pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.377922 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.378008 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.378035 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhp6t\" (UniqueName: \"kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.378059 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.378107 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.378224 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.413046 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481580 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481715 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481751 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481775 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhp6t\" (UniqueName: \"kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481799 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.481832 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: 
\"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.482692 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.483238 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.483799 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.484008 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.497454 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.511602 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.516812 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhp6t\" (UniqueName: \"kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t\") pod \"dnsmasq-dns-5c9776ccc5-wkmcr\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.519210 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.522137 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.540527 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.605056 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.605870 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.605960 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.606002 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.606256 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.606329 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.606358 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5k6b\" (UniqueName: \"kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.606451 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707660 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707728 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 
11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707760 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707780 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707792 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707871 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707901 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.707919 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5k6b\" (UniqueName: \"kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.715783 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.725424 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.730126 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.730373 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5k6b\" (UniqueName: \"kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.731143 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.739481 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data\") pod \"cinder-api-0\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " pod="openstack/cinder-api-0" Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.938450 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:13:52 crc kubenswrapper[4619]: I0126 11:13:52.951530 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.082484 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:13:53 crc kubenswrapper[4619]: E0126 11:13:53.189287 4619 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 11:13:53 crc kubenswrapper[4619]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/c9fab432-c15e-42ed-8e2f-0593b72be3b8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:13:53 crc kubenswrapper[4619]: > podSandboxID="c40cfb03b2db67b100692f1f9f1bd02c4860355e3eae296f97007b05dc070d89" Jan 26 11:13:53 crc kubenswrapper[4619]: E0126 11:13:53.189487 4619 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 11:13:53 crc kubenswrapper[4619]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85ff748b95-lxhdc_openstack(c9fab432-c15e-42ed-8e2f-0593b72be3b8): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/c9fab432-c15e-42ed-8e2f-0593b72be3b8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:13:53 crc kubenswrapper[4619]: > logger="UnhandledError" Jan 26 11:13:53 crc kubenswrapper[4619]: E0126 11:13:53.191843 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/c9fab432-c15e-42ed-8e2f-0593b72be3b8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" 
podUID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.336791 4619 generic.go:334] "Generic (PLEG): container finished" podID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerID="568063328cf375c869cbcd17174ade10f48edb94016e35549e0fe88d800a29d9" exitCode=0 Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.337038 4619 generic.go:334] "Generic (PLEG): container finished" podID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerID="79cb191465b8a7bf9e0cfebec9c66123e90081bcbde3d3834fed2928b741e6a6" exitCode=2 Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.337118 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerDied","Data":"568063328cf375c869cbcd17174ade10f48edb94016e35549e0fe88d800a29d9"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.337145 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerDied","Data":"79cb191465b8a7bf9e0cfebec9c66123e90081bcbde3d3834fed2928b741e6a6"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.343120 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerStarted","Data":"dfdbb7e52381cccd9fc0a3f20ffe8c68228b1f3fb40b3ab6bf5a87f80acb5143"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.363083 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerStarted","Data":"1d8deff64f610eca0b2e311b14fd4e35a567c21f9c5a90dc6ff5a2427a236525"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.363126 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerStarted","Data":"4e02f1273032fb82decbba3cfca310f9ace070861ea3b31d5a2089b928c7df0d"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.364168 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.364195 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.368581 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" event={"ID":"dae431ee-6510-4c51-b099-96092c0b5b6f","Type":"ContainerStarted","Data":"7a5e8885f21b7a3e89308cdf633599c9549a771ca8f748e5c2b019d7b7637789"} Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.430644 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b7745f6db-mwnfn" podStartSLOduration=3.430626299 podStartE2EDuration="3.430626299s" podCreationTimestamp="2026-01-26 11:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:53.426599437 +0000 UTC m=+1132.460640153" watchObservedRunningTime="2026-01-26 11:13:53.430626299 +0000 UTC m=+1132.464667005" Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.707424 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:13:53 crc kubenswrapper[4619]: W0126 11:13:53.932463 4619 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220bfbd9_0dca_427e_b78c_9acaf96f78b7.slice/crio-f68f1d3f35faca092331e7cacbce10bcc6f67f378e77dd5e2d66ba57c3e14e4c WatchSource:0}: Error finding container f68f1d3f35faca092331e7cacbce10bcc6f67f378e77dd5e2d66ba57c3e14e4c: Status 404 returned error can't find the container with id f68f1d3f35faca092331e7cacbce10bcc6f67f378e77dd5e2d66ba57c3e14e4c Jan 26 11:13:53 crc kubenswrapper[4619]: I0126 11:13:53.995328 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048307 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048412 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwjkq\" (UniqueName: \"kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048436 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048484 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048557 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.048630 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0\") pod \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\" (UID: \"c9fab432-c15e-42ed-8e2f-0593b72be3b8\") " Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.065733 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq" (OuterVolumeSpecName: "kube-api-access-hwjkq") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "kube-api-access-hwjkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.151824 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwjkq\" (UniqueName: \"kubernetes.io/projected/c9fab432-c15e-42ed-8e2f-0593b72be3b8-kube-api-access-hwjkq\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.169504 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.173379 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.185006 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.188141 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config" (OuterVolumeSpecName: "config") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.241348 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9fab432-c15e-42ed-8e2f-0593b72be3b8" (UID: "c9fab432-c15e-42ed-8e2f-0593b72be3b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.253897 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.253938 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.253950 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.253962 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.253974 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9fab432-c15e-42ed-8e2f-0593b72be3b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.410134 4619 generic.go:334] "Generic (PLEG): container finished" podID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerID="fa7bc46e92a3aaadd08d72231e65445ef8a48645a5946c9a954c739a9cea691a" exitCode=0 Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.411030 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" event={"ID":"dae431ee-6510-4c51-b099-96092c0b5b6f","Type":"ContainerDied","Data":"fa7bc46e92a3aaadd08d72231e65445ef8a48645a5946c9a954c739a9cea691a"} Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.420461 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerStarted","Data":"f68f1d3f35faca092331e7cacbce10bcc6f67f378e77dd5e2d66ba57c3e14e4c"} Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.424137 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.426200 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-lxhdc" event={"ID":"c9fab432-c15e-42ed-8e2f-0593b72be3b8","Type":"ContainerDied","Data":"c40cfb03b2db67b100692f1f9f1bd02c4860355e3eae296f97007b05dc070d89"} Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.426261 4619 scope.go:117] "RemoveContainer" containerID="e3a4592fe916b0bfdd1ac1c8348e991b49588d2a58b0c1f74d5534eb42dc40a0" Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.523226 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:54 crc kubenswrapper[4619]: I0126 11:13:54.541151 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-lxhdc"] Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.009254 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.103819 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.271696 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" path="/var/lib/kubelet/pods/c9fab432-c15e-42ed-8e2f-0593b72be3b8/volumes" Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.306737 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-846d64d6c4-66jvl" podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.438478 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerStarted","Data":"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72"} Jan 26 11:13:55 crc kubenswrapper[4619]: I0126 11:13:55.445987 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerStarted","Data":"434cdbbad051dcd51a146c15dc2836035532f70bedab1f6b50a8707704d13a77"} Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.458144 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" event={"ID":"dae431ee-6510-4c51-b099-96092c0b5b6f","Type":"ContainerStarted","Data":"28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7"} Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.459673 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.459721 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b76d57d79-c2tm5" event={"ID":"cd4a2072-c71c-42f6-940e-35435fc350c7","Type":"ContainerStarted","Data":"cb7960baba1026b34be1271526a66f68d44ac5e879cdd1e333d854ae478f16bc"} Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.471043 4619 
generic.go:334] "Generic (PLEG): container finished" podID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerID="7e3704a57885271c7faeef3d26990e8c17acae48718889e993d22d753a805304" exitCode=0 Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.471122 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerDied","Data":"7e3704a57885271c7faeef3d26990e8c17acae48718889e993d22d753a805304"} Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.507271 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" podStartSLOduration=4.507255647 podStartE2EDuration="4.507255647s" podCreationTimestamp="2026-01-26 11:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:56.484512297 +0000 UTC m=+1135.518553013" watchObservedRunningTime="2026-01-26 11:13:56.507255647 +0000 UTC m=+1135.541296363" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.508067 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" event={"ID":"827c156d-633b-414a-93ef-07d73ba79785","Type":"ContainerStarted","Data":"b51f7fa6c1ffe694bb0126eb4f8dd3af552323e475f864e64d1fe565ac11695c"} Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.601351 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.726654 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.726717 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.726835 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.726904 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.727323 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.727756 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjvwq\" (UniqueName: 
\"kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.727204 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.727697 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.727841 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts\") pod \"a59562e9-8459-4c22-a737-f6bde480fc2b\" (UID: \"a59562e9-8459-4c22-a737-f6bde480fc2b\") " Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.728726 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.728759 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a59562e9-8459-4c22-a737-f6bde480fc2b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.732836 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq" (OuterVolumeSpecName: "kube-api-access-rjvwq") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "kube-api-access-rjvwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.733062 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts" (OuterVolumeSpecName: "scripts") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.799747 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.832388 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.832413 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjvwq\" (UniqueName: \"kubernetes.io/projected/a59562e9-8459-4c22-a737-f6bde480fc2b-kube-api-access-rjvwq\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.832424 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.866636 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.912935 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data" (OuterVolumeSpecName: "config-data") pod "a59562e9-8459-4c22-a737-f6bde480fc2b" (UID: "a59562e9-8459-4c22-a737-f6bde480fc2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.934687 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:56 crc kubenswrapper[4619]: I0126 11:13:56.934903 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59562e9-8459-4c22-a737-f6bde480fc2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255227 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-668f5b9c84-9qvth"] Jan 26 11:13:57 crc kubenswrapper[4619]: E0126 11:13:57.255628 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="proxy-httpd" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255645 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="proxy-httpd" Jan 26 11:13:57 crc kubenswrapper[4619]: E0126 11:13:57.255661 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="sg-core" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255667 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="sg-core" Jan 26 11:13:57 crc kubenswrapper[4619]: E0126 11:13:57.255689 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" containerName="init" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255697 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" 
containerName="init" Jan 26 11:13:57 crc kubenswrapper[4619]: E0126 11:13:57.255711 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="ceilometer-notification-agent" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255716 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="ceilometer-notification-agent" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255879 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fab432-c15e-42ed-8e2f-0593b72be3b8" containerName="init" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255892 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="proxy-httpd" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255907 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="sg-core" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.255925 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" containerName="ceilometer-notification-agent" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.257008 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.259076 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.265380 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.306137 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668f5b9c84-9qvth"] Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.413785 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.441924 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-public-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.442712 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16edc018-6152-42d9-aa2d-70de2c9851f3-logs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.442894 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-combined-ca-bundle\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.442940 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-internal-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.442961 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.442981 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsk4\" (UniqueName: \"kubernetes.io/projected/16edc018-6152-42d9-aa2d-70de2c9851f3-kube-api-access-szsk4\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.443018 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data-custom\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.519971 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerStarted","Data":"02b8e9edf6a3f034c44a376424d90aa9fd3331a69e786d99483b263ef38858cb"} Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.520137 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api-log" containerID="cri-o://434cdbbad051dcd51a146c15dc2836035532f70bedab1f6b50a8707704d13a77" gracePeriod=30 Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.520351 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.520367 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api" containerID="cri-o://02b8e9edf6a3f034c44a376424d90aa9fd3331a69e786d99483b263ef38858cb" gracePeriod=30 Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.522756 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b76d57d79-c2tm5" event={"ID":"cd4a2072-c71c-42f6-940e-35435fc350c7","Type":"ContainerStarted","Data":"7cd515481e20935b799024fc969cfa4bcffb4aedc5db5626fd3f8a1c86c21cad"} Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.529911 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a59562e9-8459-4c22-a737-f6bde480fc2b","Type":"ContainerDied","Data":"2de32133eec8fa8e3c3169bdcb82d5fc750b7904abd8864e2b80f6fd9ed7d1bc"} Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.529956 4619 scope.go:117] "RemoveContainer" containerID="568063328cf375c869cbcd17174ade10f48edb94016e35549e0fe88d800a29d9" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.530089 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.548122 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-public-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549329 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16edc018-6152-42d9-aa2d-70de2c9851f3-logs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549409 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-combined-ca-bundle\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549462 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-internal-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549485 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549513 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsk4\" (UniqueName: \"kubernetes.io/projected/16edc018-6152-42d9-aa2d-70de2c9851f3-kube-api-access-szsk4\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.549560 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data-custom\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.550606 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16edc018-6152-42d9-aa2d-70de2c9851f3-logs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.564486 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" event={"ID":"827c156d-633b-414a-93ef-07d73ba79785","Type":"ContainerStarted","Data":"e633d9fb9434214aa9b9ae53ee30596db489c260f5caaa3a5e6bb289cbd00174"} Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.565670 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-combined-ca-bundle\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.567163 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-internal-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.568091 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data-custom\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.578445 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-config-data\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.578869 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16edc018-6152-42d9-aa2d-70de2c9851f3-public-tls-certs\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.585368 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerStarted","Data":"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce"} Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.593366 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsk4\" (UniqueName: \"kubernetes.io/projected/16edc018-6152-42d9-aa2d-70de2c9851f3-kube-api-access-szsk4\") pod \"barbican-api-668f5b9c84-9qvth\" (UID: \"16edc018-6152-42d9-aa2d-70de2c9851f3\") " pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.593788 4619 scope.go:117] "RemoveContainer" containerID="79cb191465b8a7bf9e0cfebec9c66123e90081bcbde3d3834fed2928b741e6a6" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.601577 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.607605 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.607582328 podStartE2EDuration="5.607582328s" podCreationTimestamp="2026-01-26 11:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:57.547098303 +0000 UTC m=+1136.581139019" watchObservedRunningTime="2026-01-26 11:13:57.607582328 +0000 UTC m=+1136.641623044" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.682768 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.722294 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.744822 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b76d57d79-c2tm5" podStartSLOduration=4.741982994 podStartE2EDuration="8.74480449s" podCreationTimestamp="2026-01-26 11:13:49 +0000 UTC" firstStartedPulling="2026-01-26 11:13:51.715702222 +0000 UTC m=+1130.749742938" lastFinishedPulling="2026-01-26 11:13:55.718523718 +0000 UTC m=+1134.752564434" observedRunningTime="2026-01-26 11:13:57.653279884 +0000 UTC m=+1136.687320610" watchObservedRunningTime="2026-01-26 11:13:57.74480449 +0000 UTC m=+1136.778845206" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.745794 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.748442 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.759144 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.761898 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.761931 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdqqt\" (UniqueName: \"kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.762003 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.762033 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.762079 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.762093 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.762125 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.780340 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.823678 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.835569 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5cd59c79cd-lqtz6" podStartSLOduration=4.914827283 podStartE2EDuration="8.835549494s" podCreationTimestamp="2026-01-26 11:13:49 +0000 UTC" firstStartedPulling="2026-01-26 11:13:51.74703805 +0000 UTC m=+1130.781078766" 
lastFinishedPulling="2026-01-26 11:13:55.667760261 +0000 UTC m=+1134.701800977" observedRunningTime="2026-01-26 11:13:57.705005237 +0000 UTC m=+1136.739045953" watchObservedRunningTime="2026-01-26 11:13:57.835549494 +0000 UTC m=+1136.869590210" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.843401 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.816793531 podStartE2EDuration="5.84338675s" podCreationTimestamp="2026-01-26 11:13:52 +0000 UTC" firstStartedPulling="2026-01-26 11:13:52.980808517 +0000 UTC m=+1132.014849233" lastFinishedPulling="2026-01-26 11:13:54.007401736 +0000 UTC m=+1133.041442452" observedRunningTime="2026-01-26 11:13:57.74373161 +0000 UTC m=+1136.777772326" watchObservedRunningTime="2026-01-26 11:13:57.84338675 +0000 UTC m=+1136.877427466" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.858964 4619 scope.go:117] "RemoveContainer" containerID="7e3704a57885271c7faeef3d26990e8c17acae48718889e993d22d753a805304" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864788 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864831 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdqqt\" (UniqueName: \"kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864892 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864936 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864967 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.864984 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.865007 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: 
I0126 11:13:57.866042 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.866354 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.897783 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.898800 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.900803 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.905404 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdqqt\" (UniqueName: \"kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:57 crc kubenswrapper[4619]: I0126 11:13:57.928192 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data\") pod \"ceilometer-0\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " pod="openstack/ceilometer-0" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.048219 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"] Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.048569 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f447695c7-n4pxf" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-api" containerID="cri-o://499daccc20383486ab3bbd336b1ebd266ea8d7366c1d923903ebeb85b451bdce" gracePeriod=30 Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.049132 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f447695c7-n4pxf" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-httpd" containerID="cri-o://a4133e694ff84cac8215b31d214b2b572d80ad955195facd4b64de2c6d5185c6" gracePeriod=30 Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.092916 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54d868dd9-v7bwm"] Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.096374 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.112590 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54d868dd9-v7bwm"] Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.121906 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.177079 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f447695c7-n4pxf" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": read tcp 10.217.0.2:33678->10.217.0.158:9696: read: connection reset by peer" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283017 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-ovndb-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283251 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-public-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283381 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddx4j\" (UniqueName: \"kubernetes.io/projected/f5841244-b607-41b5-981c-1bb78b997411-kube-api-access-ddx4j\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283487 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-internal-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283601 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283711 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-httpd-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.283803 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-combined-ca-bundle\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " 
pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.386956 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-internal-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.386992 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.387033 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-httpd-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.387065 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-combined-ca-bundle\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.387129 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-ovndb-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.387157 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-public-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.387196 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddx4j\" (UniqueName: \"kubernetes.io/projected/f5841244-b607-41b5-981c-1bb78b997411-kube-api-access-ddx4j\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.396338 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-internal-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.398322 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.399392 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-combined-ca-bundle\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.400522 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-public-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.403337 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-httpd-config\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.405317 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5841244-b607-41b5-981c-1bb78b997411-ovndb-tls-certs\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.414951 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddx4j\" (UniqueName: \"kubernetes.io/projected/f5841244-b607-41b5-981c-1bb78b997411-kube-api-access-ddx4j\") pod \"neutron-54d868dd9-v7bwm\" (UID: \"f5841244-b607-41b5-981c-1bb78b997411\") " pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.483083 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668f5b9c84-9qvth"] Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.609334 4619 generic.go:334] "Generic (PLEG): container finished" podID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerID="a4133e694ff84cac8215b31d214b2b572d80ad955195facd4b64de2c6d5185c6" exitCode=0 Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.609402 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerDied","Data":"a4133e694ff84cac8215b31d214b2b572d80ad955195facd4b64de2c6d5185c6"} Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.617225 4619 generic.go:334] "Generic (PLEG): container finished" podID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerID="434cdbbad051dcd51a146c15dc2836035532f70bedab1f6b50a8707704d13a77" exitCode=143 Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.617299 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerDied","Data":"434cdbbad051dcd51a146c15dc2836035532f70bedab1f6b50a8707704d13a77"} Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.619067 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668f5b9c84-9qvth" event={"ID":"16edc018-6152-42d9-aa2d-70de2c9851f3","Type":"ContainerStarted","Data":"d286bc428ade406c28c658a43a9cabfd6854c1e28dbf1ac2a2d4a11f8393b841"} Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.635256 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:13:58 crc kubenswrapper[4619]: I0126 11:13:58.777824 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.288822 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59562e9-8459-4c22-a737-f6bde480fc2b" path="/var/lib/kubelet/pods/a59562e9-8459-4c22-a737-f6bde480fc2b/volumes" Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.403040 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54d868dd9-v7bwm"] Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.625897 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d868dd9-v7bwm" event={"ID":"f5841244-b607-41b5-981c-1bb78b997411","Type":"ContainerStarted","Data":"d92416aa59d86426f1c0bf0d7bdd21b400eff7db819ccf4a0f03947a4c7b7980"} Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.628435 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerStarted","Data":"53600d32b1e43b5d44703a6e6e71cd62794d28d56b3b2420ff77365e33ffd365"} Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.630987 4619 generic.go:334] "Generic (PLEG): container finished" podID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerID="02b8e9edf6a3f034c44a376424d90aa9fd3331a69e786d99483b263ef38858cb" exitCode=0 Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.631073 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerDied","Data":"02b8e9edf6a3f034c44a376424d90aa9fd3331a69e786d99483b263ef38858cb"} Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.632776 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668f5b9c84-9qvth" event={"ID":"16edc018-6152-42d9-aa2d-70de2c9851f3","Type":"ContainerStarted","Data":"1be0d4cacf28fc29e4747bff6c680c833098ba9787a12a02e5bd2ad8ca0108f1"} Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.632802 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668f5b9c84-9qvth" event={"ID":"16edc018-6152-42d9-aa2d-70de2c9851f3","Type":"ContainerStarted","Data":"2d684d05051225465f546736c3c7fcb66ff3cc28f9324fe5318c0fa3cef642b3"} Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.633602 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.633649 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:13:59 crc kubenswrapper[4619]: I0126 11:13:59.672890 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-668f5b9c84-9qvth" podStartSLOduration=2.672873171 podStartE2EDuration="2.672873171s" podCreationTimestamp="2026-01-26 11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:13:59.669084246 +0000 UTC m=+1138.703124962" watchObservedRunningTime="2026-01-26 11:13:59.672873171 +0000 UTC m=+1138.706913877" Jan 26 11:14:00 crc kubenswrapper[4619]: I0126 11:14:00.295334 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f447695c7-n4pxf" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" 
containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Jan 26 11:14:00 crc kubenswrapper[4619]: I0126 11:14:00.762266 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d868dd9-v7bwm" event={"ID":"f5841244-b607-41b5-981c-1bb78b997411","Type":"ContainerStarted","Data":"e87922983f8845f584fb2c3a53506835135c706e6e6e1c77af2f9ed72bff3904"} Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.000539 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089198 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089252 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089295 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089328 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089409 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089437 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089491 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5k6b\" (UniqueName: \"kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b\") pod \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\" (UID: \"220bfbd9-0dca-427e-b78c-9acaf96f78b7\") " Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.089319 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.090707 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs" (OuterVolumeSpecName: "logs") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.110308 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b" (OuterVolumeSpecName: "kube-api-access-k5k6b") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "kube-api-access-k5k6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.114379 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts" (OuterVolumeSpecName: "scripts") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.114431 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.162308 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191317 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5k6b\" (UniqueName: \"kubernetes.io/projected/220bfbd9-0dca-427e-b78c-9acaf96f78b7-kube-api-access-k5k6b\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191348 4619 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220bfbd9-0dca-427e-b78c-9acaf96f78b7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191358 4619 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191382 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191392 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220bfbd9-0dca-427e-b78c-9acaf96f78b7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.191399 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.234976 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data" (OuterVolumeSpecName: "config-data") pod "220bfbd9-0dca-427e-b78c-9acaf96f78b7" (UID: "220bfbd9-0dca-427e-b78c-9acaf96f78b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.294470 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220bfbd9-0dca-427e-b78c-9acaf96f78b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.768290 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"220bfbd9-0dca-427e-b78c-9acaf96f78b7","Type":"ContainerDied","Data":"f68f1d3f35faca092331e7cacbce10bcc6f67f378e77dd5e2d66ba57c3e14e4c"} Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.768344 4619 scope.go:117] "RemoveContainer" containerID="02b8e9edf6a3f034c44a376424d90aa9fd3331a69e786d99483b263ef38858cb" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.768459 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.772708 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d868dd9-v7bwm" event={"ID":"f5841244-b607-41b5-981c-1bb78b997411","Type":"ContainerStarted","Data":"7b481e752349c854e361dbb43ae7d9e1ac63c76acd87be13b6d90dac411d7a28"} Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.773184 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.775281 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerStarted","Data":"81ee215a36a20ef418ea92d25f84a906b42a936638f2f1ffbabacde241c5bd7d"} Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.775304 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerStarted","Data":"54a926ce22438ccc8211f91636d4a6d6ca5eae6b7db0ebe9f5ef47864e4ad17c"} Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.790378 4619 scope.go:117] "RemoveContainer" containerID="434cdbbad051dcd51a146c15dc2836035532f70bedab1f6b50a8707704d13a77" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.798668 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.812213 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.812374 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54d868dd9-v7bwm" podStartSLOduration=4.812364159 podStartE2EDuration="4.812364159s" podCreationTimestamp="2026-01-26 11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:01.807086602 +0000 UTC m=+1140.841127318" watchObservedRunningTime="2026-01-26 11:14:01.812364159 +0000 UTC m=+1140.846404875" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.836089 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:14:01 crc kubenswrapper[4619]: E0126 11:14:01.836754 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api-log" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.836782 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api-log" Jan 26 11:14:01 crc kubenswrapper[4619]: E0126 11:14:01.836829 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.836842 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.837142 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api-log" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.837202 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" containerName="cinder-api" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 
11:14:01.838737 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.841915 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.842506 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.843011 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 11:14:01 crc kubenswrapper[4619]: I0126 11:14:01.854029 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.011568 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.011925 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.011957 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data-custom\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012013 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a1159e-53c5-4f13-9b4d-c6912b11fe46-logs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012038 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012057 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpvn2\" (UniqueName: \"kubernetes.io/projected/18a1159e-53c5-4f13-9b4d-c6912b11fe46-kube-api-access-cpvn2\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012079 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-scripts\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012096 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18a1159e-53c5-4f13-9b4d-c6912b11fe46-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.012118 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113206 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-scripts\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113266 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18a1159e-53c5-4f13-9b4d-c6912b11fe46-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113294 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113380 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113418 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113450 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data-custom\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113515 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a1159e-53c5-4f13-9b4d-c6912b11fe46-logs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113543 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.113563 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cpvn2\" (UniqueName: \"kubernetes.io/projected/18a1159e-53c5-4f13-9b4d-c6912b11fe46-kube-api-access-cpvn2\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.114517 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18a1159e-53c5-4f13-9b4d-c6912b11fe46-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.127756 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a1159e-53c5-4f13-9b4d-c6912b11fe46-logs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.128735 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.130484 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-scripts\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.138102 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.152185 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpvn2\" (UniqueName: \"kubernetes.io/projected/18a1159e-53c5-4f13-9b4d-c6912b11fe46-kube-api-access-cpvn2\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.152189 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data-custom\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.153155 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-config-data\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.154099 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a1159e-53c5-4f13-9b4d-c6912b11fe46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18a1159e-53c5-4f13-9b4d-c6912b11fe46\") " pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.414403 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 11:14:02 crc 
kubenswrapper[4619]: I0126 11:14:02.453039 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.574760 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.650302 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"] Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.650607 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="dnsmasq-dns" containerID="cri-o://2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229" gracePeriod=10 Jan 26 11:14:02 crc kubenswrapper[4619]: I0126 11:14:02.844750 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerStarted","Data":"292987db39c6d666bcd3e8631603d5bffb0eb8b0dc4a2f180ea3ac43034ee4bb"} Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.170480 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.314765 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220bfbd9-0dca-427e-b78c-9acaf96f78b7" path="/var/lib/kubelet/pods/220bfbd9-0dca-427e-b78c-9acaf96f78b7/volumes" Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.489725 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.562194 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.562748 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.562905 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgkmf\" (UniqueName: \"kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.562952 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.562979 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") " Jan 26 11:14:03 crc 
kubenswrapper[4619]: I0126 11:14:03.563047 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config\") pod \"e5d7d811-380d-4e16-b6a6-03e240671a70\" (UID: \"e5d7d811-380d-4e16-b6a6-03e240671a70\") "
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.622683 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf" (OuterVolumeSpecName: "kube-api-access-wgkmf") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "kube-api-access-wgkmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.666135 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgkmf\" (UniqueName: \"kubernetes.io/projected/e5d7d811-380d-4e16-b6a6-03e240671a70-kube-api-access-wgkmf\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.712736 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config" (OuterVolumeSpecName: "config") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.715149 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.734155 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.735596 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.748272 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5d7d811-380d-4e16-b6a6-03e240671a70" (UID: "e5d7d811-380d-4e16-b6a6-03e240671a70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.778821 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.778852 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.778862 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.778870 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.778879 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d7d811-380d-4e16-b6a6-03e240671a70-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.878631 4619 generic.go:334] "Generic (PLEG): container finished" podID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerID="499daccc20383486ab3bbd336b1ebd266ea8d7366c1d923903ebeb85b451bdce" exitCode=0
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.878697 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerDied","Data":"499daccc20383486ab3bbd336b1ebd266ea8d7366c1d923903ebeb85b451bdce"}
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.884431 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18a1159e-53c5-4f13-9b4d-c6912b11fe46","Type":"ContainerStarted","Data":"27bde46792af113e20a160d2cd0aade9832822c154ea5955f60b50f1e371e317"}
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.892806 4619 generic.go:334] "Generic (PLEG): container finished" podID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerID="2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229" exitCode=0
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.892849 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerDied","Data":"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"}
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.892897 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s" event={"ID":"e5d7d811-380d-4e16-b6a6-03e240671a70","Type":"ContainerDied","Data":"a02dd9504fb6dab2f94f59a6ba9375fac411f2b58199da19866144c97ee4d718"}
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.892916 4619 scope.go:117] "RemoveContainer" containerID="2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.892938 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dqf8s"
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.943056 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"]
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.955998 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dqf8s"]
Jan 26 11:14:03 crc kubenswrapper[4619]: I0126 11:14:03.977087 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.015371 4619 scope.go:117] "RemoveContainer" containerID="7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.049362 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.103002 4619 scope.go:117] "RemoveContainer" containerID="2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"
Jan 26 11:14:04 crc kubenswrapper[4619]: E0126 11:14:04.106260 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229\": container with ID starting with 2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229 not found: ID does not exist" containerID="2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.106298 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229"} err="failed to get container status \"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229\": rpc error: code = NotFound desc = could not find container \"2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229\": container with ID starting with 2d82e4bedd98589587b61f6251c5fcb80eb42e0139c15088f91fe8151b004229 not found: ID does not exist"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.106319 4619 scope.go:117] "RemoveContainer" containerID="7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"
Jan 26 11:14:04 crc kubenswrapper[4619]: E0126 11:14:04.112717 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf\": container with ID starting with 7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf not found: ID does not exist" containerID="7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.112752 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf"} err="failed to get container status \"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf\": rpc error: code = NotFound desc = could not find container \"7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf\": container with ID starting with 7ddbabd5324293b76155928e1859c212f114d69680ad4666eec29df55ab261bf not found: ID does not exist"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.544489 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f447695c7-n4pxf"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702224 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702352 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702389 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702451 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702475 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702501 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npm9v\" (UniqueName: \"kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.702587 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs\") pod \"6b43b93b-1dc4-4498-b464-30609f8788c3\" (UID: \"6b43b93b-1dc4-4498-b464-30609f8788c3\") "
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.710727 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.711075 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.717806 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v" (OuterVolumeSpecName: "kube-api-access-npm9v") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "kube-api-access-npm9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.720798 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.803812 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config" (OuterVolumeSpecName: "config") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.804961 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.804998 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.805013 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npm9v\" (UniqueName: \"kubernetes.io/projected/6b43b93b-1dc4-4498-b464-30609f8788c3-kube-api-access-npm9v\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.822754 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.824730 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.826791 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.850890 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6b43b93b-1dc4-4498-b464-30609f8788c3" (UID: "6b43b93b-1dc4-4498-b464-30609f8788c3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.906735 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.906765 4619 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.906773 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.906784 4619 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b43b93b-1dc4-4498-b464-30609f8788c3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.926138 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f447695c7-n4pxf" event={"ID":"6b43b93b-1dc4-4498-b464-30609f8788c3","Type":"ContainerDied","Data":"838fbd2d77860d0edc5a5c4a75081b4b8e8dc68d7a8c941401e4462ad4461361"}
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.926193 4619 scope.go:117] "RemoveContainer" containerID="a4133e694ff84cac8215b31d214b2b572d80ad955195facd4b64de2c6d5185c6"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.926552 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f447695c7-n4pxf"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.942778 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerStarted","Data":"c89f58dfa2810e46f9f747a195373f2f76c4c29b7503149c7927282f24d5d08b"}
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.943756 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.964075 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18a1159e-53c5-4f13-9b4d-c6912b11fe46","Type":"ContainerStarted","Data":"4b390115ecb37f89477a0afd2dd32ecf4600301d60a6abc3e7bc2a84d1f4bf6a"}
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.964230 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="cinder-scheduler" containerID="cri-o://0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72" gracePeriod=30
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.964258 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="probe" containerID="cri-o://cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce" gracePeriod=30
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.973351 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.056987552 podStartE2EDuration="7.973336684s" podCreationTimestamp="2026-01-26 11:13:57 +0000 UTC" firstStartedPulling="2026-01-26 11:13:58.831823672 +0000 UTC m=+1137.865864388" lastFinishedPulling="2026-01-26 11:14:03.748172804 +0000 UTC m=+1142.782213520" observedRunningTime="2026-01-26 11:14:04.970747903 +0000 UTC m=+1144.004788619" watchObservedRunningTime="2026-01-26 11:14:04.973336684 +0000 UTC m=+1144.007377400"
Jan 26 11:14:04 crc kubenswrapper[4619]: I0126 11:14:04.984958 4619 scope.go:117] "RemoveContainer" containerID="499daccc20383486ab3bbd336b1ebd266ea8d7366c1d923903ebeb85b451bdce"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.015779 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"]
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.025757 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f447695c7-n4pxf"]
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.290545 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" path="/var/lib/kubelet/pods/6b43b93b-1dc4-4498-b464-30609f8788c3/volumes"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.291884 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" path="/var/lib/kubelet/pods/e5d7d811-380d-4e16-b6a6-03e240671a70/volumes"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.710362 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.711003 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.976357 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18a1159e-53c5-4f13-9b4d-c6912b11fe46","Type":"ContainerStarted","Data":"213b5caea7e0bd4df4afb2a6cf3c2c11dd450f358ed46a7cf09221b61f427c6c"}
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.976522 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 11:14:05 crc kubenswrapper[4619]: I0126 11:14:05.997451 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.997435124 podStartE2EDuration="4.997435124s" podCreationTimestamp="2026-01-26 11:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:05.99620941 +0000 UTC m=+1145.030250126" watchObservedRunningTime="2026-01-26 11:14:05.997435124 +0000 UTC m=+1145.031475860"
Jan 26 11:14:06 crc kubenswrapper[4619]: I0126 11:14:06.489900 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846c8954d4-fg4cj"
Jan 26 11:14:06 crc kubenswrapper[4619]: I0126 11:14:06.629093 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846c8954d4-fg4cj"
Jan 26 11:14:06 crc kubenswrapper[4619]: I0126 11:14:06.983891 4619 generic.go:334] "Generic (PLEG): container finished" podID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerID="cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce" exitCode=0
Jan 26 11:14:06 crc kubenswrapper[4619]: I0126 11:14:06.984736 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerDied","Data":"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce"}
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.459544 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.584389 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.584681 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.584753 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.584773 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqmc\" (UniqueName: \"kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.584965 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.585038 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.585084 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts\") pod \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\" (UID: \"1f4725e4-fd9d-49c8-b4a4-04d9f855f285\") "
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.585878 4619 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.592132 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc" (OuterVolumeSpecName: "kube-api-access-hdqmc") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "kube-api-access-hdqmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.594081 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.609395 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts" (OuterVolumeSpecName: "scripts") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.656528 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.689103 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.689129 4619 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.689141 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.689150 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqmc\" (UniqueName: \"kubernetes.io/projected/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-kube-api-access-hdqmc\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.816333 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data" (OuterVolumeSpecName: "config-data") pod "1f4725e4-fd9d-49c8-b4a4-04d9f855f285" (UID: "1f4725e4-fd9d-49c8-b4a4-04d9f855f285"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.893508 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4725e4-fd9d-49c8-b4a4-04d9f855f285-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:07 crc kubenswrapper[4619]: I0126 11:14:07.998801 4619 generic.go:334] "Generic (PLEG): container finished" podID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerID="0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72" exitCode=0 Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:07.999318 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:07.999751 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerDied","Data":"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72"} Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:07.999779 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f4725e4-fd9d-49c8-b4a4-04d9f855f285","Type":"ContainerDied","Data":"dfdbb7e52381cccd9fc0a3f20ffe8c68228b1f3fb40b3ab6bf5a87f80acb5143"} Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:07.999812 4619 scope.go:117] "RemoveContainer" containerID="cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.036485 4619 scope.go:117] "RemoveContainer" containerID="0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.038153 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.047733 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.065833 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066260 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="init" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066280 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="init" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066304 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="dnsmasq-dns" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066311 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="dnsmasq-dns" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066318 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="probe" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066326 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="probe" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066335 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-httpd" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066341 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-httpd" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066352 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="cinder-scheduler" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066358 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="cinder-scheduler" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.066376 4619 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-api" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066382 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-api" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066552 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-httpd" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066566 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d7d811-380d-4e16-b6a6-03e240671a70" containerName="dnsmasq-dns" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066574 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="probe" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066593 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b43b93b-1dc4-4498-b464-30609f8788c3" containerName="neutron-api" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.066601 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" containerName="cinder-scheduler" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.067510 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.071381 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.095672 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.106804 4619 scope.go:117] "RemoveContainer" containerID="cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.120040 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce\": container with ID starting with cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce not found: ID does not exist" containerID="cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.120083 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce"} err="failed to get container status \"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce\": rpc error: code = NotFound desc = could not find container \"cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce\": container with ID starting with cad175bd2369560127c601ec5540274412d35237d7a2b27d95d9f52dc88fedce not found: ID does not exist" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.120135 4619 scope.go:117] "RemoveContainer" containerID="0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72" Jan 26 11:14:08 crc kubenswrapper[4619]: E0126 11:14:08.125780 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72\": container with ID starting with 0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72 not found: ID does not 
exist" containerID="0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.125840 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72"} err="failed to get container status \"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72\": rpc error: code = NotFound desc = could not find container \"0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72\": container with ID starting with 0b91a7eac347326752ee76e49847b194f622429e2c3ca40b2c4d70df6805bf72 not found: ID does not exist" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.203988 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/519b14a3-af8d-4238-9bc0-69e13bae0a9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.204092 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzg6\" (UniqueName: \"kubernetes.io/projected/519b14a3-af8d-4238-9bc0-69e13bae0a9e-kube-api-access-hzzg6\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.204129 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.204149 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.204208 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.204269 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.305875 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.305940 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.306047 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/519b14a3-af8d-4238-9bc0-69e13bae0a9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.306076 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzg6\" (UniqueName: \"kubernetes.io/projected/519b14a3-af8d-4238-9bc0-69e13bae0a9e-kube-api-access-hzzg6\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.306111 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.306136 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.306819 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/519b14a3-af8d-4238-9bc0-69e13bae0a9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.310128 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.312475 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.313737 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.315524 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/519b14a3-af8d-4238-9bc0-69e13bae0a9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.378393 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzg6\" (UniqueName: \"kubernetes.io/projected/519b14a3-af8d-4238-9bc0-69e13bae0a9e-kube-api-access-hzzg6\") pod \"cinder-scheduler-0\" (UID: \"519b14a3-af8d-4238-9bc0-69e13bae0a9e\") " pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.395407 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:14:08 crc kubenswrapper[4619]: I0126 11:14:08.928503 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:14:08 crc kubenswrapper[4619]: W0126 11:14:08.956708 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519b14a3_af8d_4238_9bc0_69e13bae0a9e.slice/crio-54d2b045603766087ffaadb776fb14599e2b3b3c477c585407c9ad6c3a65d176 WatchSource:0}: Error finding container 54d2b045603766087ffaadb776fb14599e2b3b3c477c585407c9ad6c3a65d176: Status 404 returned error can't find the container with id 54d2b045603766087ffaadb776fb14599e2b3b3c477c585407c9ad6c3a65d176 Jan 26 11:14:09 crc kubenswrapper[4619]: I0126 11:14:09.026786 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"519b14a3-af8d-4238-9bc0-69e13bae0a9e","Type":"ContainerStarted","Data":"54d2b045603766087ffaadb776fb14599e2b3b3c477c585407c9ad6c3a65d176"} Jan 26 11:14:09 crc kubenswrapper[4619]: I0126 11:14:09.303254 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4725e4-fd9d-49c8-b4a4-04d9f855f285" path="/var/lib/kubelet/pods/1f4725e4-fd9d-49c8-b4a4-04d9f855f285/volumes" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.054743 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"519b14a3-af8d-4238-9bc0-69e13bae0a9e","Type":"ContainerStarted","Data":"2065aba44897174207898196bd21369bf0e3e994b78f2fcc797e4140c43eed8e"} Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.107886 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.107955 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.108672 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd"} pod="openstack/horizon-6f67c775d4-7ls4r" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.108710 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" containerID="cri-o://b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd" gracePeriod=30 Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.312847 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-846d64d6c4-66jvl" 
podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.312928 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.313641 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d27a7212d9be76fc53e45b9d6ccefc04025f9b2c7a9b45834a4e8810c17eaca8"} pod="openstack/horizon-846d64d6c4-66jvl" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.313680 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-846d64d6c4-66jvl" podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" containerID="cri-o://d27a7212d9be76fc53e45b9d6ccefc04025f9b2c7a9b45834a4e8810c17eaca8" gracePeriod=30 Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.550899 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:14:10 crc kubenswrapper[4619]: I0126 11:14:10.676275 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.062987 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"519b14a3-af8d-4238-9bc0-69e13bae0a9e","Type":"ContainerStarted","Data":"5f61acfa0dd3f1e166e70e648a1153867a8f79f219a93371cab3d0037ed0fa15"} Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.085023 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.085005599 podStartE2EDuration="3.085005599s" podCreationTimestamp="2026-01-26 11:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:11.078310814 +0000 UTC m=+1150.112351530" watchObservedRunningTime="2026-01-26 11:14:11.085005599 +0000 UTC m=+1150.119046315" Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.359842 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75ddf854f7-wtpq9" Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.390214 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.683362 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668f5b9c84-9qvth" Jan 26 11:14:11 crc kubenswrapper[4619]: I0126 11:14:11.760521 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"] Jan 26 11:14:12 crc kubenswrapper[4619]: I0126 11:14:12.073369 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" containerID="cri-o://4e02f1273032fb82decbba3cfca310f9ace070861ea3b31d5a2089b928c7df0d" gracePeriod=30 Jan 26 11:14:12 crc kubenswrapper[4619]: I0126 11:14:12.073672 4619 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" containerID="cri-o://1d8deff64f610eca0b2e311b14fd4e35a567c21f9c5a90dc6ff5a2427a236525" gracePeriod=30 Jan 26 11:14:13 crc kubenswrapper[4619]: I0126 11:14:13.082400 4619 generic.go:334] "Generic (PLEG): container finished" podID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerID="4e02f1273032fb82decbba3cfca310f9ace070861ea3b31d5a2089b928c7df0d" exitCode=143 Jan 26 11:14:13 crc kubenswrapper[4619]: I0126 11:14:13.082442 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerDied","Data":"4e02f1273032fb82decbba3cfca310f9ace070861ea3b31d5a2089b928c7df0d"} Jan 26 11:14:13 crc kubenswrapper[4619]: I0126 11:14:13.396380 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 11:14:14 crc kubenswrapper[4619]: I0126 11:14:14.092860 4619 generic.go:334] "Generic (PLEG): container finished" podID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerID="b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd" exitCode=0 Jan 26 11:14:14 crc kubenswrapper[4619]: I0126 11:14:14.092901 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerDied","Data":"b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd"} Jan 26 11:14:14 crc kubenswrapper[4619]: I0126 11:14:14.092927 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerStarted","Data":"b56fd8a7dbb2c8b1978a088101e7acddc67948bd399d538c2472832c6cffbd25"} Jan 26 11:14:14 crc kubenswrapper[4619]: I0126 11:14:14.234459 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:14:14 crc kubenswrapper[4619]: I0126 11:14:14.234516 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.112797 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.113273 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.136776 4619 generic.go:334] "Generic (PLEG): container finished" podID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerID="d27a7212d9be76fc53e45b9d6ccefc04025f9b2c7a9b45834a4e8810c17eaca8" exitCode=0 Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.137920 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-846d64d6c4-66jvl" 
event={"ID":"10c8ed10-dab5-49e5-a030-4be99c720ae0","Type":"ContainerDied","Data":"d27a7212d9be76fc53e45b9d6ccefc04025f9b2c7a9b45834a4e8810c17eaca8"} Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.137949 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-846d64d6c4-66jvl" event={"ID":"10c8ed10-dab5-49e5-a030-4be99c720ae0","Type":"ContainerStarted","Data":"5acc355ec9d0cbcafc926fe73648e59b79299ae0b0d700b1a40325e367be7d11"} Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.329605 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.329945 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.388417 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.390094 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.393543 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.394880 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.401469 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.402636 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wvbld" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.437251 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.450771 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.450818 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.450848 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient" Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.450893 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22vq\" (UniqueName: \"kubernetes.io/projected/5a4db787-7749-4a67-a52a-b8c4f3229c65-kube-api-access-x22vq\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient" Jan 26 11:14:15 
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.552243 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.552526 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.553359 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.553425 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.555653 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22vq\" (UniqueName: \"kubernetes.io/projected/5a4db787-7749-4a67-a52a-b8c4f3229c65-kube-api-access-x22vq\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.576816 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.577275 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22vq\" (UniqueName: \"kubernetes.io/projected/5a4db787-7749-4a67-a52a-b8c4f3229c65-kube-api-access-x22vq\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.586138 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a4db787-7749-4a67-a52a-b8c4f3229c65-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a4db787-7749-4a67-a52a-b8c4f3229c65\") " pod="openstack/openstackclient"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.627022 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.627092 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b7745f6db-mwnfn" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused"
Jan 26 11:14:15 crc kubenswrapper[4619]: I0126 11:14:15.705093 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.153915 4619 generic.go:334] "Generic (PLEG): container finished" podID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerID="1d8deff64f610eca0b2e311b14fd4e35a567c21f9c5a90dc6ff5a2427a236525" exitCode=0
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.153999 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerDied","Data":"1d8deff64f610eca0b2e311b14fd4e35a567c21f9c5a90dc6ff5a2427a236525"}
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.287862 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b7745f6db-mwnfn"
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.375427 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.392155 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs\") pod \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") "
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.392229 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle\") pod \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") "
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.392284 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom\") pod \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") "
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.392423 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data\") pod \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") "
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.392447 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46fk4\" (UniqueName: \"kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4\") pod \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\" (UID: \"6f6c5e4d-17c2-45e9-add1-0b35026ba69d\") "
Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.394375 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs" (OuterVolumeSpecName: "logs") pod "6f6c5e4d-17c2-45e9-add1-0b35026ba69d" (UID: "6f6c5e4d-17c2-45e9-add1-0b35026ba69d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.405203 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4" (OuterVolumeSpecName: "kube-api-access-46fk4") pod "6f6c5e4d-17c2-45e9-add1-0b35026ba69d" (UID: "6f6c5e4d-17c2-45e9-add1-0b35026ba69d"). InnerVolumeSpecName "kube-api-access-46fk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.408515 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f6c5e4d-17c2-45e9-add1-0b35026ba69d" (UID: "6f6c5e4d-17c2-45e9-add1-0b35026ba69d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.434858 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6c5e4d-17c2-45e9-add1-0b35026ba69d" (UID: "6f6c5e4d-17c2-45e9-add1-0b35026ba69d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.479679 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data" (OuterVolumeSpecName: "config-data") pod "6f6c5e4d-17c2-45e9-add1-0b35026ba69d" (UID: "6f6c5e4d-17c2-45e9-add1-0b35026ba69d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.494905 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.494935 4619 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.494944 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.494952 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46fk4\" (UniqueName: \"kubernetes.io/projected/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-kube-api-access-46fk4\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:16 crc kubenswrapper[4619]: I0126 11:14:16.494961 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f6c5e4d-17c2-45e9-add1-0b35026ba69d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.171415 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7745f6db-mwnfn" event={"ID":"6f6c5e4d-17c2-45e9-add1-0b35026ba69d","Type":"ContainerDied","Data":"8d0d0597138fd708bab64a049e6efdf65481b105eafe70c80d7017f6e8556b48"} Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.172596 4619 scope.go:117] "RemoveContainer" containerID="1d8deff64f610eca0b2e311b14fd4e35a567c21f9c5a90dc6ff5a2427a236525" Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.171711 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b7745f6db-mwnfn" Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.176857 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5a4db787-7749-4a67-a52a-b8c4f3229c65","Type":"ContainerStarted","Data":"6731be98e054505c0930691461d233cf60c19aee829cb2354c7decd90b031b9e"} Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.204279 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"] Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.204549 4619 scope.go:117] "RemoveContainer" containerID="4e02f1273032fb82decbba3cfca310f9ace070861ea3b31d5a2089b928c7df0d" Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.219347 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b7745f6db-mwnfn"] Jan 26 11:14:17 crc kubenswrapper[4619]: I0126 11:14:17.275723 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" path="/var/lib/kubelet/pods/6f6c5e4d-17c2-45e9-add1-0b35026ba69d/volumes" Jan 26 11:14:18 crc kubenswrapper[4619]: I0126 11:14:18.613819 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.673685 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-659c4b6587-4stqp"] Jan 26 11:14:22 crc kubenswrapper[4619]: E0126 11:14:22.674695 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.674709 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" Jan 26 11:14:22 crc kubenswrapper[4619]: E0126 11:14:22.674725 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.674731 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.674938 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.674952 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6c5e4d-17c2-45e9-add1-0b35026ba69d" containerName="barbican-api-log" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.675985 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.682599 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.682929 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.683042 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.714212 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659c4b6587-4stqp"] Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846640 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-combined-ca-bundle\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846684 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-internal-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846708 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-run-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846752 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdvk\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-kube-api-access-bzdvk\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846768 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-etc-swift\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846829 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-public-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846908 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-log-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " 
pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.846964 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-config-data\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.948887 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-config-data\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.948961 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-combined-ca-bundle\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.948980 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-internal-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.948996 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-run-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949037 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdvk\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-kube-api-access-bzdvk\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949056 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-etc-swift\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949090 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-public-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949144 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-log-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 
11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949571 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-log-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.949825 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-run-httpd\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.955304 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-config-data\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.958113 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-combined-ca-bundle\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.958178 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-internal-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.963250 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-etc-swift\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.973459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-public-tls-certs\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:22 crc kubenswrapper[4619]: I0126 11:14:22.983167 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdvk\" (UniqueName: \"kubernetes.io/projected/faba43f0-103d-43e7-9f3f-ef5be7ee8fe1-kube-api-access-bzdvk\") pod \"swift-proxy-659c4b6587-4stqp\" (UID: \"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1\") " pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:23 crc kubenswrapper[4619]: I0126 11:14:23.018523 4619 util.go:30] "No sandbox for pod can be found. 
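Each swift-proxy volume above goes through the same reconciler sequence: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded". The gap between the last two can be read straight out of the log; a sketch of that pairing follows (Python, stdlib only; keying on the volume name alone is a simplification, since names like config-data repeat across pods, and the escaped \" quoting is assumed to appear literally in the raw log):

    import re
    from datetime import datetime

    TS_RE = re.compile(r'[IEW]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})')
    START_RE = re.compile(r'MountVolume started for volume \\"([^\\"]+)\\"')
    DONE_RE = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\"')

    def klog_time(line):
        """Extract the klog timestamp (time of day only; the date is in the prefix)."""
        m = TS_RE.search(line)
        return datetime.strptime(m.group(1), "%H:%M:%S.%f") if m else None

    def mount_latencies(path):
        """Yield (volume, seconds) for each started -> SetUp succeeded pair."""
        started = {}
        for line in open(path, encoding="utf-8", errors="replace"):
            if (m := START_RE.search(line)):
                started[m.group(1)] = klog_time(line)
            elif (m := DONE_RE.search(line)) and m.group(1) in started:
                delta = klog_time(line) - started.pop(m.group(1))
                yield m.group(1), delta.total_seconds()

For the swift-proxy volumes above the gaps are all well under 100 ms; log-httpd, for instance, goes from started at 22.949144 to succeeded at 22.949571.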
Need to start a new one" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:25 crc kubenswrapper[4619]: I0126 11:14:25.103400 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 11:14:25 crc kubenswrapper[4619]: I0126 11:14:25.307375 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-846d64d6c4-66jvl" podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.066318 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.066837 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-central-agent" containerID="cri-o://54a926ce22438ccc8211f91636d4a6d6ca5eae6b7db0ebe9f5ef47864e4ad17c" gracePeriod=30 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.066910 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" containerID="cri-o://c89f58dfa2810e46f9f747a195373f2f76c4c29b7503149c7927282f24d5d08b" gracePeriod=30 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.067072 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="sg-core" containerID="cri-o://292987db39c6d666bcd3e8631603d5bffb0eb8b0dc4a2f180ea3ac43034ee4bb" gracePeriod=30 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.067121 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-notification-agent" containerID="cri-o://81ee215a36a20ef418ea92d25f84a906b42a936638f2f1ffbabacde241c5bd7d" gracePeriod=30 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.085324 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": EOF" Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.311187 4619 generic.go:334] "Generic (PLEG): container finished" podID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerID="c89f58dfa2810e46f9f747a195373f2f76c4c29b7503149c7927282f24d5d08b" exitCode=0 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.311482 4619 generic.go:334] "Generic (PLEG): container finished" podID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerID="292987db39c6d666bcd3e8631603d5bffb0eb8b0dc4a2f180ea3ac43034ee4bb" exitCode=2 Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 11:14:27.311223 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerDied","Data":"c89f58dfa2810e46f9f747a195373f2f76c4c29b7503149c7927282f24d5d08b"} Jan 26 11:14:27 crc kubenswrapper[4619]: I0126 
11:14:27.311517 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerDied","Data":"292987db39c6d666bcd3e8631603d5bffb0eb8b0dc4a2f180ea3ac43034ee4bb"} Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.139694 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": dial tcp 10.217.0.169:3000: connect: connection refused" Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.326146 4619 generic.go:334] "Generic (PLEG): container finished" podID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerID="54a926ce22438ccc8211f91636d4a6d6ca5eae6b7db0ebe9f5ef47864e4ad17c" exitCode=0 Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.326192 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerDied","Data":"54a926ce22438ccc8211f91636d4a6d6ca5eae6b7db0ebe9f5ef47864e4ad17c"} Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.442333 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.442587 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-log" containerID="cri-o://2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b" gracePeriod=30 Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.443020 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-httpd" containerID="cri-o://6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c" gracePeriod=30 Jan 26 11:14:28 crc kubenswrapper[4619]: I0126 11:14:28.911099 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54d868dd9-v7bwm" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.044500 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.059033 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66768f896-nrg7c" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-api" containerID="cri-o://880ffa035f4f1105fb675a7b032ff779645c85f1b2469b696eae34ade35b1fb6" gracePeriod=30 Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.059324 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66768f896-nrg7c" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-httpd" containerID="cri-o://8aa91cf4eb39ce34143f69a832c7b185190ef5c8466a9ab8a35cbc86e6566305" gracePeriod=30 Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.456860 4619 generic.go:334] "Generic (PLEG): container finished" podID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerID="81ee215a36a20ef418ea92d25f84a906b42a936638f2f1ffbabacde241c5bd7d" exitCode=0 Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.457169 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
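The exit codes in the "container finished" entries follow the usual 128+signal convention: 0 is a clean shutdown, a small code like sg-core's exitCode=2 is an application error, and codes above 128 (such as the exitCode=143 reported for glance-log just below) mean death by signal, with 143 = 128 + 15 (SIGTERM) matching the SIGTERM kubelet sends when it starts the 30-second grace period logged above. A tiny decoder for reference (Python; the convention is standard POSIX/container behavior, not something this log itself states):

    import signal

    def describe_exit(code):
        """Interpret a container exit code using the 128+signal convention."""
        if code == 0:
            return "clean exit"
        if code > 128:
            try:
                return f"killed by {signal.Signals(code - 128).name}"
            except ValueError:
                return f"killed by signal {code - 128}"
        return f"application error (status {code})"

    for code in (0, 2, 143):
        print(code, "->", describe_exit(code))
    # 0 -> clean exit
    # 2 -> application error (status 2)
    # 143 -> killed by SIGTERM (exited within the grace period)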
event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerDied","Data":"81ee215a36a20ef418ea92d25f84a906b42a936638f2f1ffbabacde241c5bd7d"} Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.466550 4619 generic.go:334] "Generic (PLEG): container finished" podID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerID="2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b" exitCode=143 Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.466659 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerDied","Data":"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b"} Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.721929 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799545 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdqqt\" (UniqueName: \"kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799629 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799657 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799705 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799758 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799803 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.799866 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle\") pod \"92c0c735-b411-4f87-9dcd-4cf4565ba828\" (UID: \"92c0c735-b411-4f87-9dcd-4cf4565ba828\") " Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.825697 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.826939 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.848881 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt" (OuterVolumeSpecName: "kube-api-access-mdqqt") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "kube-api-access-mdqqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.869877 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659c4b6587-4stqp"] Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.887762 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts" (OuterVolumeSpecName: "scripts") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.901324 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.901351 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdqqt\" (UniqueName: \"kubernetes.io/projected/92c0c735-b411-4f87-9dcd-4cf4565ba828-kube-api-access-mdqqt\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.901366 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c0c735-b411-4f87-9dcd-4cf4565ba828-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:29 crc kubenswrapper[4619]: I0126 11:14:29.901377 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.026757 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.084170 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data" (OuterVolumeSpecName: "config-data") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.105872 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.106356 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.127703 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92c0c735-b411-4f87-9dcd-4cf4565ba828" (UID: "92c0c735-b411-4f87-9dcd-4cf4565ba828"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.207585 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c0c735-b411-4f87-9dcd-4cf4565ba828-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.474657 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659c4b6587-4stqp" event={"ID":"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1","Type":"ContainerStarted","Data":"ef943c1de5898c759f5901cba79b840fd7a4b87605418df8f1c478c402a14fac"} Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.474714 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659c4b6587-4stqp" event={"ID":"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1","Type":"ContainerStarted","Data":"64a6a3f57baef0ebccb2318ca54e54d7bfe4de97fd9c069ba126abab135abcdf"} Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.476305 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5a4db787-7749-4a67-a52a-b8c4f3229c65","Type":"ContainerStarted","Data":"b3d59ec159e7421df48504c75fb270e09c7247d04b1c6ef486294c06af03973f"} Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.486281 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92c0c735-b411-4f87-9dcd-4cf4565ba828","Type":"ContainerDied","Data":"53600d32b1e43b5d44703a6e6e71cd62794d28d56b3b2420ff77365e33ffd365"} Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.486338 4619 scope.go:117] "RemoveContainer" containerID="c89f58dfa2810e46f9f747a195373f2f76c4c29b7503149c7927282f24d5d08b" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.486534 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.495220 4619 generic.go:334] "Generic (PLEG): container finished" podID="fc689870-cc3b-4d35-968a-78b787569209" containerID="8aa91cf4eb39ce34143f69a832c7b185190ef5c8466a9ab8a35cbc86e6566305" exitCode=0 Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.495374 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerDied","Data":"8aa91cf4eb39ce34143f69a832c7b185190ef5c8466a9ab8a35cbc86e6566305"} Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.508602 4619 scope.go:117] "RemoveContainer" containerID="292987db39c6d666bcd3e8631603d5bffb0eb8b0dc4a2f180ea3ac43034ee4bb" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.511110 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.936570882 podStartE2EDuration="15.511090361s" podCreationTimestamp="2026-01-26 11:14:15 +0000 UTC" firstStartedPulling="2026-01-26 11:14:16.385155985 +0000 UTC m=+1155.419196691" lastFinishedPulling="2026-01-26 11:14:28.959675464 +0000 UTC m=+1167.993716170" observedRunningTime="2026-01-26 11:14:30.502524844 +0000 UTC m=+1169.536565560" watchObservedRunningTime="2026-01-26 11:14:30.511090361 +0000 UTC m=+1169.545131067" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.548709 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.550848 4619 scope.go:117] "RemoveContainer" containerID="81ee215a36a20ef418ea92d25f84a906b42a936638f2f1ffbabacde241c5bd7d" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.554375 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.576917 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:30 crc kubenswrapper[4619]: E0126 11:14:30.577746 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.577843 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" Jan 26 11:14:30 crc kubenswrapper[4619]: E0126 11:14:30.577926 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-notification-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.577977 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-notification-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: E0126 11:14:30.578075 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="sg-core" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578126 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="sg-core" Jan 26 11:14:30 crc kubenswrapper[4619]: E0126 11:14:30.578188 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-central-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578236 4619 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-central-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578449 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-notification-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578585 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="ceilometer-central-agent" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578663 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="proxy-httpd" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.578729 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" containerName="sg-core" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.579540 4619 scope.go:117] "RemoveContainer" containerID="54a926ce22438ccc8211f91636d4a6d6ca5eae6b7db0ebe9f5ef47864e4ad17c" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.588190 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.595734 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.596023 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.617174 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717201 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717241 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717325 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717363 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717409 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7l9t\" (UniqueName: \"kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t\") pod \"ceilometer-0\" (UID: 
\"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717444 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.717515 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.818856 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.818896 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.818963 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.818977 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.819018 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l9t\" (UniqueName: \"kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.819035 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.819087 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.819865 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") pod \"ceilometer-0\" (UID: 
\"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.820152 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.825048 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.825452 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.826262 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.826591 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.840156 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l9t\" (UniqueName: \"kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t\") pod \"ceilometer-0\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " pod="openstack/ceilometer-0" Jan 26 11:14:30 crc kubenswrapper[4619]: I0126 11:14:30.918848 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.273294 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c0c735-b411-4f87-9dcd-4cf4565ba828" path="/var/lib/kubelet/pods/92c0c735-b411-4f87-9dcd-4cf4565ba828/volumes" Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.435591 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.478941 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.480110 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-log" containerID="cri-o://60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454" gracePeriod=30 Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.480240 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-httpd" containerID="cri-o://b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5" gracePeriod=30 Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.506584 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659c4b6587-4stqp" event={"ID":"faba43f0-103d-43e7-9f3f-ef5be7ee8fe1","Type":"ContainerStarted","Data":"b9641a521bf92405161ee899b6c89211e41ef063d9141e60bcf30cc8a8acc8e9"} Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.506675 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.506689 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.507847 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerStarted","Data":"728f9bbad4e48aec39a8d431818e37e5d7c0181238b94a5043a479aa7a9fab71"} Jan 26 11:14:31 crc kubenswrapper[4619]: I0126 11:14:31.559763 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-659c4b6587-4stqp" podStartSLOduration=9.559741561 podStartE2EDuration="9.559741561s" podCreationTimestamp="2026-01-26 11:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:31.535101608 +0000 UTC m=+1170.569142324" watchObservedRunningTime="2026-01-26 11:14:31.559741561 +0000 UTC m=+1170.593782277" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.468177 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559184 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7rx5\" (UniqueName: \"kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559225 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559254 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559357 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559406 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559525 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559548 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.559577 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs\") pod \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\" (UID: \"6838e991-4c5f-4e90-8cd7-2c92dff641fe\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.567079 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.575368 4619 generic.go:334] "Generic (PLEG): container finished" podID="1636ba60-de14-4281-aa49-417f7808ccd9" containerID="60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454" exitCode=143 Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.575454 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerDied","Data":"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454"} Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.578883 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs" (OuterVolumeSpecName: "logs") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.588756 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts" (OuterVolumeSpecName: "scripts") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.599222 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5" (OuterVolumeSpecName: "kube-api-access-n7rx5") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "kube-api-access-n7rx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.599302 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.604800 4619 generic.go:334] "Generic (PLEG): container finished" podID="fc689870-cc3b-4d35-968a-78b787569209" containerID="880ffa035f4f1105fb675a7b032ff779645c85f1b2469b696eae34ade35b1fb6" exitCode=0 Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.604878 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerDied","Data":"880ffa035f4f1105fb675a7b032ff779645c85f1b2469b696eae34ade35b1fb6"} Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.653894 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerStarted","Data":"9ec12891e1dc7ccff340f3fbfea21873c1d5b31aee0c3022ec1f299b3c5c5228"} Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.661713 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.661747 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7rx5\" (UniqueName: \"kubernetes.io/projected/6838e991-4c5f-4e90-8cd7-2c92dff641fe-kube-api-access-n7rx5\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.661784 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.661794 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6838e991-4c5f-4e90-8cd7-2c92dff641fe-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.661802 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.665188 4619 generic.go:334] "Generic (PLEG): container finished" podID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerID="6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c" exitCode=0 Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.665740 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.666278 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerDied","Data":"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c"} Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.666305 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6838e991-4c5f-4e90-8cd7-2c92dff641fe","Type":"ContainerDied","Data":"6ed494a56105879bb0c98c9bb1dc69f2e132d6a3178fbc325b37a300b665b7e2"} Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.666321 4619 scope.go:117] "RemoveContainer" containerID="6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.710820 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.735970 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.748484 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.759492 4619 scope.go:117] "RemoveContainer" containerID="2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.781731 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.782528 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.782551 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.782561 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.784741 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data" (OuterVolumeSpecName: "config-data") pod "6838e991-4c5f-4e90-8cd7-2c92dff641fe" (UID: "6838e991-4c5f-4e90-8cd7-2c92dff641fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.800963 4619 scope.go:117] "RemoveContainer" containerID="6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c" Jan 26 11:14:32 crc kubenswrapper[4619]: E0126 11:14:32.801569 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c\": container with ID starting with 6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c not found: ID does not exist" containerID="6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.801608 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c"} err="failed to get container status \"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c\": rpc error: code = NotFound desc = could not find container \"6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c\": container with ID starting with 6972a4cfab72cbad67ea69cd51c2eac00e4386665a4c7648b3d1750a1f4b8a5c not found: ID does not exist" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.801663 4619 scope.go:117] "RemoveContainer" containerID="2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b" Jan 26 11:14:32 crc kubenswrapper[4619]: E0126 11:14:32.802797 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b\": container with ID starting with 2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b not found: ID does not exist" containerID="2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.802829 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b"} err="failed to get container status \"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b\": rpc error: code = NotFound desc = could not find container \"2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b\": container with ID starting with 2087dbd274c5e46fa8efc93bbb6bce0f418053bf84f1a79f9f06ee34b486af9b not found: ID does not exist" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.886586 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle\") pod \"fc689870-cc3b-4d35-968a-78b787569209\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.886692 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs\") pod \"fc689870-cc3b-4d35-968a-78b787569209\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.886752 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2dz\" (UniqueName: \"kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz\") pod 
\"fc689870-cc3b-4d35-968a-78b787569209\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.886879 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config\") pod \"fc689870-cc3b-4d35-968a-78b787569209\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.886959 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config\") pod \"fc689870-cc3b-4d35-968a-78b787569209\" (UID: \"fc689870-cc3b-4d35-968a-78b787569209\") " Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.887288 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6838e991-4c5f-4e90-8cd7-2c92dff641fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.890809 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fc689870-cc3b-4d35-968a-78b787569209" (UID: "fc689870-cc3b-4d35-968a-78b787569209"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.899843 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz" (OuterVolumeSpecName: "kube-api-access-zz2dz") pod "fc689870-cc3b-4d35-968a-78b787569209" (UID: "fc689870-cc3b-4d35-968a-78b787569209"). InnerVolumeSpecName "kube-api-access-zz2dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.987031 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc689870-cc3b-4d35-968a-78b787569209" (UID: "fc689870-cc3b-4d35-968a-78b787569209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.988583 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.988651 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.988668 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2dz\" (UniqueName: \"kubernetes.io/projected/fc689870-cc3b-4d35-968a-78b787569209-kube-api-access-zz2dz\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.992916 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config" (OuterVolumeSpecName: "config") pod "fc689870-cc3b-4d35-968a-78b787569209" (UID: "fc689870-cc3b-4d35-968a-78b787569209"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:32 crc kubenswrapper[4619]: I0126 11:14:32.996325 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fc689870-cc3b-4d35-968a-78b787569209" (UID: "fc689870-cc3b-4d35-968a-78b787569209"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.089746 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.090072 4619 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc689870-cc3b-4d35-968a-78b787569209-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.098637 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.117881 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144206 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:33 crc kubenswrapper[4619]: E0126 11:14:33.144598 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-api" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144637 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-api" Jan 26 11:14:33 crc kubenswrapper[4619]: E0126 11:14:33.144658 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144664 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: E0126 11:14:33.144675 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-log" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144681 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-log" Jan 26 11:14:33 crc kubenswrapper[4619]: E0126 11:14:33.144718 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144725 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144913 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-api" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144953 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144962 4619 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fc689870-cc3b-4d35-968a-78b787569209" containerName="neutron-httpd" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.144977 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" containerName="glance-log" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.146366 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.156313 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.156478 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.181697 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191246 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhqc\" (UniqueName: \"kubernetes.io/projected/27d84c05-55fb-4f3a-a363-aa137f111de7-kube-api-access-kzhqc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191310 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-scripts\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191358 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191422 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-logs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191465 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191489 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-config-data\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191505 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.191543 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.270965 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6838e991-4c5f-4e90-8cd7-2c92dff641fe" path="/var/lib/kubelet/pods/6838e991-4c5f-4e90-8cd7-2c92dff641fe/volumes" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293424 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-logs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293512 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293551 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-config-data\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293572 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293595 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293656 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhqc\" (UniqueName: \"kubernetes.io/projected/27d84c05-55fb-4f3a-a363-aa137f111de7-kube-api-access-kzhqc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293686 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.293728 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.294000 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-logs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.294005 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27d84c05-55fb-4f3a-a363-aa137f111de7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.294657 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.300138 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-config-data\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.301301 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.308405 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.310009 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d84c05-55fb-4f3a-a363-aa137f111de7-scripts\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.313151 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzhqc\" (UniqueName: \"kubernetes.io/projected/27d84c05-55fb-4f3a-a363-aa137f111de7-kube-api-access-kzhqc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 
11:14:33.324710 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"27d84c05-55fb-4f3a-a363-aa137f111de7\") " pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.525405 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.694782 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66768f896-nrg7c" event={"ID":"fc689870-cc3b-4d35-968a-78b787569209","Type":"ContainerDied","Data":"c17b72b668659ba38daf38c72b4480bdb58505522772f2ef14d4461b7fdc3966"} Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.695156 4619 scope.go:117] "RemoveContainer" containerID="8aa91cf4eb39ce34143f69a832c7b185190ef5c8466a9ab8a35cbc86e6566305" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.695259 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66768f896-nrg7c" Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.703964 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerStarted","Data":"ccd004efee83a06531eea3cc9b0eac55daf85e7d1bbf045a97169d622acbb84c"} Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.741258 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.759537 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66768f896-nrg7c"] Jan 26 11:14:33 crc kubenswrapper[4619]: I0126 11:14:33.763322 4619 scope.go:117] "RemoveContainer" containerID="880ffa035f4f1105fb675a7b032ff779645c85f1b2469b696eae34ade35b1fb6" Jan 26 11:14:34 crc kubenswrapper[4619]: I0126 11:14:34.174947 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:14:34 crc kubenswrapper[4619]: I0126 11:14:34.736786 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"27d84c05-55fb-4f3a-a363-aa137f111de7","Type":"ContainerStarted","Data":"659c20a2ea151e49065717353aa8d37c99ebb9f674bc80ed6292d733257293d2"} Jan 26 11:14:34 crc kubenswrapper[4619]: I0126 11:14:34.737124 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"27d84c05-55fb-4f3a-a363-aa137f111de7","Type":"ContainerStarted","Data":"ffe0bec93d874905661af320e3673d1a0893b96ad13ab3d141b646cab8079446"} Jan 26 11:14:34 crc kubenswrapper[4619]: I0126 11:14:34.751290 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerStarted","Data":"d330680426fb88e2c264315e5ae6e505c7fb55426c760b7dad12ec1e9d6d291c"} Jan 26 11:14:34 crc kubenswrapper[4619]: I0126 11:14:34.888236 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.108514 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.151:8443: connect: connection refused" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.274858 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc689870-cc3b-4d35-968a-78b787569209" path="/var/lib/kubelet/pods/fc689870-cc3b-4d35-968a-78b787569209/volumes" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.306653 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-846d64d6c4-66jvl" podUID="10c8ed10-dab5-49e5-a030-4be99c720ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.548907 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646378 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646461 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn4jj\" (UniqueName: \"kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646493 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646550 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646630 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646666 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646731 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data\") pod \"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.646794 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs\") pod 
\"1636ba60-de14-4281-aa49-417f7808ccd9\" (UID: \"1636ba60-de14-4281-aa49-417f7808ccd9\") " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.647180 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.647387 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs" (OuterVolumeSpecName: "logs") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.657942 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.664749 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj" (OuterVolumeSpecName: "kube-api-access-pn4jj") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "kube-api-access-pn4jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.664776 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts" (OuterVolumeSpecName: "scripts") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.717534 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.749850 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.751086 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.751160 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn4jj\" (UniqueName: \"kubernetes.io/projected/1636ba60-de14-4281-aa49-417f7808ccd9-kube-api-access-pn4jj\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.751244 4619 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1636ba60-de14-4281-aa49-417f7808ccd9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.751331 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.751406 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.763675 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.766755 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data" (OuterVolumeSpecName: "config-data") pod "1636ba60-de14-4281-aa49-417f7808ccd9" (UID: "1636ba60-de14-4281-aa49-417f7808ccd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.767112 4619 generic.go:334] "Generic (PLEG): container finished" podID="1636ba60-de14-4281-aa49-417f7808ccd9" containerID="b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5" exitCode=0 Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.767174 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerDied","Data":"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5"} Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.767200 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1636ba60-de14-4281-aa49-417f7808ccd9","Type":"ContainerDied","Data":"262aa5a31759bf007a83ec9ffdeb36caea0d08c02970e4631d7cab7f9018c434"} Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.767218 4619 scope.go:117] "RemoveContainer" containerID="b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.767326 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.774012 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.776952 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"27d84c05-55fb-4f3a-a363-aa137f111de7","Type":"ContainerStarted","Data":"e0869d683959e914cd8ef3d1a82cc2a1d7c3192d54a8bdf5b38bb6b5c1bcfce6"} Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787092 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerStarted","Data":"27862d5cc233514a0d884f5f65207caa2484ef5c00e3e4ae278ad8b025e72610"} Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787277 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-central-agent" containerID="cri-o://9ec12891e1dc7ccff340f3fbfea21873c1d5b31aee0c3022ec1f299b3c5c5228" gracePeriod=30 Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787411 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787459 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="sg-core" containerID="cri-o://d330680426fb88e2c264315e5ae6e505c7fb55426c760b7dad12ec1e9d6d291c" gracePeriod=30 Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787497 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-notification-agent" containerID="cri-o://ccd004efee83a06531eea3cc9b0eac55daf85e7d1bbf045a97169d622acbb84c" gracePeriod=30 Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.787511 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="proxy-httpd" containerID="cri-o://27862d5cc233514a0d884f5f65207caa2484ef5c00e3e4ae278ad8b025e72610" gracePeriod=30 Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.806785 4619 scope.go:117] "RemoveContainer" containerID="60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.817931 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.81791136 podStartE2EDuration="2.81791136s" podCreationTimestamp="2026-01-26 11:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:35.798292537 +0000 UTC m=+1174.832333253" watchObservedRunningTime="2026-01-26 11:14:35.81791136 +0000 UTC m=+1174.851952076" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.835884 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.844862 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.870714 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.870891 4619 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.870997 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636ba60-de14-4281-aa49-417f7808ccd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.883308 4619 scope.go:117] "RemoveContainer" containerID="b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5" Jan 26 11:14:35 crc kubenswrapper[4619]: E0126 11:14:35.883637 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5\": container with ID starting with b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5 not found: ID does not exist" containerID="b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.883666 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5"} err="failed to get container status \"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5\": rpc error: code = NotFound desc = could not find container \"b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5\": container with ID starting with b04ba2db6303a3e986a00acb703a7c20212ca7681a0d9ac6544f8f12ae68f5a5 not found: ID does not exist" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.883684 4619 scope.go:117] "RemoveContainer" containerID="60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.886010 4619 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:14:35 crc kubenswrapper[4619]: E0126 11:14:35.888147 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454\": container with ID starting with 60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454 not found: ID does not exist" containerID="60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.888176 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454"} err="failed to get container status \"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454\": rpc error: code = NotFound desc = could not find container \"60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454\": container with ID starting with 60cf38830dc90720683823c7a047291b0c15ebaba4a85f20552240ab4c94a454 not found: ID does not exist" Jan 26 11:14:35 crc kubenswrapper[4619]: E0126 11:14:35.888716 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-log" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.888738 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-log" Jan 26 11:14:35 crc kubenswrapper[4619]: E0126 11:14:35.888836 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-httpd" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.888845 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-httpd" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.892846 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-log" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.892907 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" containerName="glance-httpd" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.893884 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.898970 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.899160 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.947482 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.480334674 podStartE2EDuration="5.9474627s" podCreationTimestamp="2026-01-26 11:14:30 +0000 UTC" firstStartedPulling="2026-01-26 11:14:31.434859332 +0000 UTC m=+1170.468900038" lastFinishedPulling="2026-01-26 11:14:34.901987348 +0000 UTC m=+1173.936028064" observedRunningTime="2026-01-26 11:14:35.868994115 +0000 UTC m=+1174.903034821" watchObservedRunningTime="2026-01-26 11:14:35.9474627 +0000 UTC m=+1174.981503416" Jan 26 11:14:35 crc kubenswrapper[4619]: I0126 11:14:35.959445 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074089 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074129 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074150 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-kube-api-access-ndzsd\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074175 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074198 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074220 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074240 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.074262 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176288 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176331 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176352 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-kube-api-access-ndzsd\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176379 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176404 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176424 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176445 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176467 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.176932 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.177318 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.177517 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.181823 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.181833 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.182771 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.193383 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.199455 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153-kube-api-access-ndzsd\") pod \"glance-default-internal-api-0\" (UID: \"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.217996 4619 
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.325750 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.770397 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 11:14:36 crc kubenswrapper[4619]: W0126 11:14:36.773921 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f50a1e8_fb78_41d6_8ba3_4c1c9f66d153.slice/crio-d01c4c05ac2171f22c31f4153dbeb59a640450a6c78ce773df1d8cfb197f130c WatchSource:0}: Error finding container d01c4c05ac2171f22c31f4153dbeb59a640450a6c78ce773df1d8cfb197f130c: Status 404 returned error can't find the container with id d01c4c05ac2171f22c31f4153dbeb59a640450a6c78ce773df1d8cfb197f130c
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799358 4619 generic.go:334] "Generic (PLEG): container finished" podID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerID="27862d5cc233514a0d884f5f65207caa2484ef5c00e3e4ae278ad8b025e72610" exitCode=0
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799391 4619 generic.go:334] "Generic (PLEG): container finished" podID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerID="d330680426fb88e2c264315e5ae6e505c7fb55426c760b7dad12ec1e9d6d291c" exitCode=2
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799402 4619 generic.go:334] "Generic (PLEG): container finished" podID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerID="ccd004efee83a06531eea3cc9b0eac55daf85e7d1bbf045a97169d622acbb84c" exitCode=0
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799442 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerDied","Data":"27862d5cc233514a0d884f5f65207caa2484ef5c00e3e4ae278ad8b025e72610"}
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799468 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerDied","Data":"d330680426fb88e2c264315e5ae6e505c7fb55426c760b7dad12ec1e9d6d291c"}
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.799482 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerDied","Data":"ccd004efee83a06531eea3cc9b0eac55daf85e7d1bbf045a97169d622acbb84c"}
Jan 26 11:14:36 crc kubenswrapper[4619]: I0126 11:14:36.800653 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153","Type":"ContainerStarted","Data":"d01c4c05ac2171f22c31f4153dbeb59a640450a6c78ce773df1d8cfb197f130c"}
Jan 26 11:14:37 crc kubenswrapper[4619]: I0126 11:14:37.272856 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1636ba60-de14-4281-aa49-417f7808ccd9" path="/var/lib/kubelet/pods/1636ba60-de14-4281-aa49-417f7808ccd9/volumes"
Jan 26 11:14:37 crc kubenswrapper[4619]: I0126 11:14:37.810795 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153","Type":"ContainerStarted","Data":"c3c53986863c58653e0fac37383d6c16dfe86304bc676fd076386c7ca6dbe029"}
pod="openstack/glance-default-internal-api-0" event={"ID":"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153","Type":"ContainerStarted","Data":"c3c53986863c58653e0fac37383d6c16dfe86304bc676fd076386c7ca6dbe029"} Jan 26 11:14:38 crc kubenswrapper[4619]: I0126 11:14:38.033833 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:38 crc kubenswrapper[4619]: I0126 11:14:38.034711 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659c4b6587-4stqp" Jan 26 11:14:38 crc kubenswrapper[4619]: I0126 11:14:38.822340 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153","Type":"ContainerStarted","Data":"d246d91c5b71730f47c89a41b8d31ff84a184cebc4fcc64e6d92efaba23d3d87"} Jan 26 11:14:38 crc kubenswrapper[4619]: I0126 11:14:38.861498 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.861478413 podStartE2EDuration="3.861478413s" podCreationTimestamp="2026-01-26 11:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:14:38.842688172 +0000 UTC m=+1177.876728898" watchObservedRunningTime="2026-01-26 11:14:38.861478413 +0000 UTC m=+1177.895519139" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.276448 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9xp94"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.278596 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.298710 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xp94"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.380410 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.380464 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vks6\" (UniqueName: \"kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.407662 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6d7e-account-create-update-xmjsb"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.424823 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.427698 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.447857 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-n9nnv"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.449018 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.463687 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6d7e-account-create-update-xmjsb"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.471940 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n9nnv"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481536 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481633 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481662 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vks6\" (UniqueName: \"kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481683 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481801 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zffk7\" (UniqueName: \"kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.481821 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wdv\" (UniqueName: \"kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.482366 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.514867 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vks6\" (UniqueName: \"kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6\") pod \"nova-api-db-create-9xp94\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.573901 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-g8k9m"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.575117 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.583221 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zffk7\" (UniqueName: \"kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.583277 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55wdv\" (UniqueName: \"kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.583312 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.583375 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.584375 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.585401 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.586757 4619 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-9c25-account-create-update-sgdh4"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.587836 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.589407 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.598448 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g8k9m"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.606663 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zffk7\" (UniqueName: \"kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7\") pod \"nova-cell0-db-create-n9nnv\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.606728 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55wdv\" (UniqueName: \"kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv\") pod \"nova-api-6d7e-account-create-update-xmjsb\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.607148 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9c25-account-create-update-sgdh4"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.611966 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.684956 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggqj\" (UniqueName: \"kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.685121 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.685144 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.685159 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prj89\" (UniqueName: \"kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.748310 4619 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.767000 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.786713 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.786757 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.786785 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prj89\" (UniqueName: \"kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.786811 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tggqj\" (UniqueName: \"kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.787997 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.788190 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.790868 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6717-account-create-update-h2wzj"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.792021 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.806277 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6717-account-create-update-h2wzj"] Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.806869 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.808432 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prj89\" (UniqueName: \"kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89\") pod \"nova-cell0-9c25-account-create-update-sgdh4\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.809828 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tggqj\" (UniqueName: \"kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj\") pod \"nova-cell1-db-create-g8k9m\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.888211 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqmc\" (UniqueName: \"kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.888648 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.898988 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.991610 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.991747 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqmc\" (UniqueName: \"kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:40 crc kubenswrapper[4619]: I0126 11:14:40.993174 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.020903 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqmc\" (UniqueName: \"kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc\") pod \"nova-cell1-6717-account-create-update-h2wzj\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.051110 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.144641 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xp94"] Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.169136 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.431232 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n9nnv"] Jan 26 11:14:41 crc kubenswrapper[4619]: W0126 11:14:41.447929 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3d77756_36ba_479e_8688_779283522d80.slice/crio-3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691 WatchSource:0}: Error finding container 3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691: Status 404 returned error can't find the container with id 3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691 Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.598870 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6d7e-account-create-update-xmjsb"] Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.663867 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g8k9m"] Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.715451 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9c25-account-create-update-sgdh4"] Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.831210 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6717-account-create-update-h2wzj"] Jan 26 11:14:41 crc kubenswrapper[4619]: W0126 11:14:41.845357 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1bb0260_95f5_41fd_b051_0f122151a9c0.slice/crio-96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190 WatchSource:0}: Error finding container 96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190: Status 404 returned error can't find the container with id 96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190 Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.894466 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g8k9m" event={"ID":"21d35b94-5e1e-4fd2-a2d7-40ca92101a54","Type":"ContainerStarted","Data":"b457fec3ebf135250c9f47bdaecd31b392cc2cbf2e95d8de727713611ddfb381"} Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.897589 4619 generic.go:334] "Generic (PLEG): container finished" podID="3168c764-1c23-47f7-ad80-20fe2f860ffd" containerID="82340cc71d3000dc912ce9d5066620b2a8b74bb08f9f99e329572e593f9a9b8b" exitCode=0 Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.897994 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xp94" event={"ID":"3168c764-1c23-47f7-ad80-20fe2f860ffd","Type":"ContainerDied","Data":"82340cc71d3000dc912ce9d5066620b2a8b74bb08f9f99e329572e593f9a9b8b"} Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.898024 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xp94" event={"ID":"3168c764-1c23-47f7-ad80-20fe2f860ffd","Type":"ContainerStarted","Data":"030bd8423dd83be465d8dd6e5e705d7cb5cd45a7a55f5c414139d203983d5fd1"} Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.900977 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" event={"ID":"cb30d86a-e144-4072-821a-f159e5dbdf31","Type":"ContainerStarted","Data":"3a6c8c5abcd5a88424c9c2d035cd9de9e5e8f0d3beddc27b4fc7f664ad38fed9"} Jan 26 
Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.909982 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerDied","Data":"9ec12891e1dc7ccff340f3fbfea21873c1d5b31aee0c3022ec1f299b3c5c5228"}
Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.913721 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n9nnv" event={"ID":"b3d77756-36ba-479e-8688-779283522d80","Type":"ContainerStarted","Data":"3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691"}
Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.914885 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" event={"ID":"b1bb0260-95f5-41fd-b051-0f122151a9c0","Type":"ContainerStarted","Data":"96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190"}
Jan 26 11:14:41 crc kubenswrapper[4619]: I0126 11:14:41.930134 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" event={"ID":"ffa8eff3-988a-4fe5-93b9-371636a0ae8f","Type":"ContainerStarted","Data":"979476f1d1a73a926dfe18573dea3239254d61ae0abb82c2d984e6487cfe702c"}
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.112279 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.225800 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.225936 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.226043 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7l9t\" (UniqueName: \"kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.226091 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.226164 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.226234 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") "
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.226259 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data\") pod \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\" (UID: \"cc6b9696-037c-42b6-ac54-56e3c83d9f99\") " Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.227931 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.231886 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.256104 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts" (OuterVolumeSpecName: "scripts") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.260253 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t" (OuterVolumeSpecName: "kube-api-access-t7l9t") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "kube-api-access-t7l9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.293180 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.330711 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.330947 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.331035 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc6b9696-037c-42b6-ac54-56e3c83d9f99-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.331164 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7l9t\" (UniqueName: \"kubernetes.io/projected/cc6b9696-037c-42b6-ac54-56e3c83d9f99-kube-api-access-t7l9t\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.331243 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.366853 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.376061 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data" (OuterVolumeSpecName: "config-data") pod "cc6b9696-037c-42b6-ac54-56e3c83d9f99" (UID: "cc6b9696-037c-42b6-ac54-56e3c83d9f99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.433024 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.433061 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6b9696-037c-42b6-ac54-56e3c83d9f99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.939469 4619 generic.go:334] "Generic (PLEG): container finished" podID="21d35b94-5e1e-4fd2-a2d7-40ca92101a54" containerID="5c9246b5938c0c3e73b72f0031d0db1c3c89ba9889e682cc21f04c103124fc72" exitCode=0 Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.939522 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g8k9m" event={"ID":"21d35b94-5e1e-4fd2-a2d7-40ca92101a54","Type":"ContainerDied","Data":"5c9246b5938c0c3e73b72f0031d0db1c3c89ba9889e682cc21f04c103124fc72"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.941301 4619 generic.go:334] "Generic (PLEG): container finished" podID="cb30d86a-e144-4072-821a-f159e5dbdf31" containerID="2b489812844dcb7e10e1adf0b8b04a1b53fdd8833b52af2a74b9b6f1392c8dd9" exitCode=0 Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.941348 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" event={"ID":"cb30d86a-e144-4072-821a-f159e5dbdf31","Type":"ContainerDied","Data":"2b489812844dcb7e10e1adf0b8b04a1b53fdd8833b52af2a74b9b6f1392c8dd9"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.944685 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc6b9696-037c-42b6-ac54-56e3c83d9f99","Type":"ContainerDied","Data":"728f9bbad4e48aec39a8d431818e37e5d7c0181238b94a5043a479aa7a9fab71"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.944716 4619 scope.go:117] "RemoveContainer" containerID="27862d5cc233514a0d884f5f65207caa2484ef5c00e3e4ae278ad8b025e72610" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.944838 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.948315 4619 generic.go:334] "Generic (PLEG): container finished" podID="b1bb0260-95f5-41fd-b051-0f122151a9c0" containerID="42881539eefcbd3d0935cefff96205634895b18de1c1066bc0b66d0e8e8223c1" exitCode=0 Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.948488 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" event={"ID":"b1bb0260-95f5-41fd-b051-0f122151a9c0","Type":"ContainerDied","Data":"42881539eefcbd3d0935cefff96205634895b18de1c1066bc0b66d0e8e8223c1"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.951578 4619 generic.go:334] "Generic (PLEG): container finished" podID="b3d77756-36ba-479e-8688-779283522d80" containerID="fe9298eb491e1aac696e8b00278e5b70a29d4bb586f884898f817f7ecea3085d" exitCode=0 Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.951713 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n9nnv" event={"ID":"b3d77756-36ba-479e-8688-779283522d80","Type":"ContainerDied","Data":"fe9298eb491e1aac696e8b00278e5b70a29d4bb586f884898f817f7ecea3085d"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.959733 4619 generic.go:334] "Generic (PLEG): container finished" podID="ffa8eff3-988a-4fe5-93b9-371636a0ae8f" containerID="15b133ebb79bdccdd304699b3317aeb6f1d27be7ab886144cafb6b655d9c65eb" exitCode=0 Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.960259 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" event={"ID":"ffa8eff3-988a-4fe5-93b9-371636a0ae8f","Type":"ContainerDied","Data":"15b133ebb79bdccdd304699b3317aeb6f1d27be7ab886144cafb6b655d9c65eb"} Jan 26 11:14:42 crc kubenswrapper[4619]: I0126 11:14:42.980404 4619 scope.go:117] "RemoveContainer" containerID="d330680426fb88e2c264315e5ae6e505c7fb55426c760b7dad12ec1e9d6d291c" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.055772 4619 scope.go:117] "RemoveContainer" containerID="ccd004efee83a06531eea3cc9b0eac55daf85e7d1bbf045a97169d622acbb84c" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.064957 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.077072 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.096699 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:43 crc kubenswrapper[4619]: E0126 11:14:43.097161 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="proxy-httpd" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097175 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="proxy-httpd" Jan 26 11:14:43 crc kubenswrapper[4619]: E0126 11:14:43.097192 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-notification-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097198 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-notification-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: E0126 11:14:43.097221 4619 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="sg-core" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097228 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="sg-core" Jan 26 11:14:43 crc kubenswrapper[4619]: E0126 11:14:43.097243 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-central-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097249 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-central-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097440 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="sg-core" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097458 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-notification-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097481 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="ceilometer-central-agent" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.097491 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" containerName="proxy-httpd" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.099336 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.103165 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.103238 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.110881 4619 scope.go:117] "RemoveContainer" containerID="9ec12891e1dc7ccff340f3fbfea21873c1d5b31aee0c3022ec1f299b3c5c5228" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.111836 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153312 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153363 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153398 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153528 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153673 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153720 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nh9n\" (UniqueName: \"kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.153755 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.255146 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.255898 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.255932 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.255967 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.255995 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.256016 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nh9n\" (UniqueName: \"kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0" Jan 26 11:14:43 
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.256035 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.256392 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.256608 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.267633 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.276651 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.285280 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nh9n\" (UniqueName: \"kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.295428 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.295451 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " pod="openstack/ceilometer-0"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.300418 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6b9696-037c-42b6-ac54-56e3c83d9f99" path="/var/lib/kubelet/pods/cc6b9696-037c-42b6-ac54-56e3c83d9f99/volumes"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.403879 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xp94"
Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.425019 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
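
Both "Cleaned up orphaned pod volumes dir" entries in this window (for pod UIDs 1636ba60-de14-4281-aa49-417f7808ccd9 and cc6b9696-037c-42b6-ac54-56e3c83d9f99) report the same path shape: the kubelet root joined with pods/<pod-UID>/volumes. A trivial sketch of that construction; the helper name and default root are illustrative, taken only from the paths printed above:

    from pathlib import Path

    def pod_volumes_dir(pod_uid: str, kubelet_root: str = "/var/lib/kubelet") -> Path:
        """The path reported by 'Cleaned up orphaned pod volumes dir' entries."""
        return Path(kubelet_root) / "pods" / pod_uid / "volumes"

    print(pod_volumes_dir("cc6b9696-037c-42b6-ac54-56e3c83d9f99"))
    # /var/lib/kubelet/pods/cc6b9696-037c-42b6-ac54-56e3c83d9f99/volumes
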
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.459210 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vks6\" (UniqueName: \"kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6\") pod \"3168c764-1c23-47f7-ad80-20fe2f860ffd\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.459477 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts\") pod \"3168c764-1c23-47f7-ad80-20fe2f860ffd\" (UID: \"3168c764-1c23-47f7-ad80-20fe2f860ffd\") " Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.459955 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3168c764-1c23-47f7-ad80-20fe2f860ffd" (UID: "3168c764-1c23-47f7-ad80-20fe2f860ffd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.469854 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6" (OuterVolumeSpecName: "kube-api-access-4vks6") pod "3168c764-1c23-47f7-ad80-20fe2f860ffd" (UID: "3168c764-1c23-47f7-ad80-20fe2f860ffd"). InnerVolumeSpecName "kube-api-access-4vks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.526198 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.526452 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.561526 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vks6\" (UniqueName: \"kubernetes.io/projected/3168c764-1c23-47f7-ad80-20fe2f860ffd-kube-api-access-4vks6\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.561670 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3168c764-1c23-47f7-ad80-20fe2f860ffd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.577293 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:14:43 crc kubenswrapper[4619]: I0126 11:14:43.628639 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:43.944809 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:43.996193 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9xp94" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:43.997691 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xp94" event={"ID":"3168c764-1c23-47f7-ad80-20fe2f860ffd","Type":"ContainerDied","Data":"030bd8423dd83be465d8dd6e5e705d7cb5cd45a7a55f5c414139d203983d5fd1"} Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:43.997730 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030bd8423dd83be465d8dd6e5e705d7cb5cd45a7a55f5c414139d203983d5fd1" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.013726 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerStarted","Data":"a84d8b594dfb456491faf1a233f5638d89133da5e21d8033e5847eed7dc62b78"} Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.015257 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.015296 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.234800 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.234949 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.235060 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.236069 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.236137 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9" gracePeriod=600 Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.576142 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.717227 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tggqj\" (UniqueName: \"kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj\") pod \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.717443 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts\") pod \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\" (UID: \"21d35b94-5e1e-4fd2-a2d7-40ca92101a54\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.718574 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21d35b94-5e1e-4fd2-a2d7-40ca92101a54" (UID: "21d35b94-5e1e-4fd2-a2d7-40ca92101a54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.726017 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj" (OuterVolumeSpecName: "kube-api-access-tggqj") pod "21d35b94-5e1e-4fd2-a2d7-40ca92101a54" (UID: "21d35b94-5e1e-4fd2-a2d7-40ca92101a54"). InnerVolumeSpecName "kube-api-access-tggqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.805387 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.817839 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.823765 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.823795 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tggqj\" (UniqueName: \"kubernetes.io/projected/21d35b94-5e1e-4fd2-a2d7-40ca92101a54-kube-api-access-tggqj\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.842729 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.884784 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.924404 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prj89\" (UniqueName: \"kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89\") pod \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.924488 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts\") pod \"b3d77756-36ba-479e-8688-779283522d80\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.924527 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zffk7\" (UniqueName: \"kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7\") pod \"b3d77756-36ba-479e-8688-779283522d80\" (UID: \"b3d77756-36ba-479e-8688-779283522d80\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.924660 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts\") pod \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\" (UID: \"ffa8eff3-988a-4fe5-93b9-371636a0ae8f\") " Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.925581 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3d77756-36ba-479e-8688-779283522d80" (UID: "b3d77756-36ba-479e-8688-779283522d80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.927379 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffa8eff3-988a-4fe5-93b9-371636a0ae8f" (UID: "ffa8eff3-988a-4fe5-93b9-371636a0ae8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.935126 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7" (OuterVolumeSpecName: "kube-api-access-zffk7") pod "b3d77756-36ba-479e-8688-779283522d80" (UID: "b3d77756-36ba-479e-8688-779283522d80"). InnerVolumeSpecName "kube-api-access-zffk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:44 crc kubenswrapper[4619]: I0126 11:14:44.938065 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89" (OuterVolumeSpecName: "kube-api-access-prj89") pod "ffa8eff3-988a-4fe5-93b9-371636a0ae8f" (UID: "ffa8eff3-988a-4fe5-93b9-371636a0ae8f"). InnerVolumeSpecName "kube-api-access-prj89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.022782 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" event={"ID":"cb30d86a-e144-4072-821a-f159e5dbdf31","Type":"ContainerDied","Data":"3a6c8c5abcd5a88424c9c2d035cd9de9e5e8f0d3beddc27b4fc7f664ad38fed9"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.022832 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6c8c5abcd5a88424c9c2d035cd9de9e5e8f0d3beddc27b4fc7f664ad38fed9" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.022905 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6d7e-account-create-update-xmjsb" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.025392 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerStarted","Data":"6b9c6110377facfbc99a64ca650d47c1c9664ef84bd9a6b414c061428258e2eb"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026181 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wdv\" (UniqueName: \"kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv\") pod \"cb30d86a-e144-4072-821a-f159e5dbdf31\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026310 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts\") pod \"cb30d86a-e144-4072-821a-f159e5dbdf31\" (UID: \"cb30d86a-e144-4072-821a-f159e5dbdf31\") " Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026431 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cqmc\" (UniqueName: \"kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc\") pod \"b1bb0260-95f5-41fd-b051-0f122151a9c0\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026458 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts\") pod \"b1bb0260-95f5-41fd-b051-0f122151a9c0\" (UID: \"b1bb0260-95f5-41fd-b051-0f122151a9c0\") " Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026821 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zffk7\" (UniqueName: \"kubernetes.io/projected/b3d77756-36ba-479e-8688-779283522d80-kube-api-access-zffk7\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026837 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026846 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prj89\" (UniqueName: \"kubernetes.io/projected/ffa8eff3-988a-4fe5-93b9-371636a0ae8f-kube-api-access-prj89\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.026855 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b3d77756-36ba-479e-8688-779283522d80-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.027203 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1bb0260-95f5-41fd-b051-0f122151a9c0" (UID: "b1bb0260-95f5-41fd-b051-0f122151a9c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.027953 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb30d86a-e144-4072-821a-f159e5dbdf31" (UID: "cb30d86a-e144-4072-821a-f159e5dbdf31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.029178 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" event={"ID":"b1bb0260-95f5-41fd-b051-0f122151a9c0","Type":"ContainerDied","Data":"96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.029213 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96680228bf02ebdd907416ad1c63ee86f2aadba2285b5f509add8631dd40e190" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.029263 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6717-account-create-update-h2wzj" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.031569 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n9nnv" event={"ID":"b3d77756-36ba-479e-8688-779283522d80","Type":"ContainerDied","Data":"3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.031591 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a49fad480b06194490b994acc32a024b1496ed6af497acb57dd9fcd75e1e691" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.031672 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n9nnv" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.032572 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv" (OuterVolumeSpecName: "kube-api-access-55wdv") pod "cb30d86a-e144-4072-821a-f159e5dbdf31" (UID: "cb30d86a-e144-4072-821a-f159e5dbdf31"). InnerVolumeSpecName "kube-api-access-55wdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.032756 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc" (OuterVolumeSpecName: "kube-api-access-8cqmc") pod "b1bb0260-95f5-41fd-b051-0f122151a9c0" (UID: "b1bb0260-95f5-41fd-b051-0f122151a9c0"). InnerVolumeSpecName "kube-api-access-8cqmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.038069 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9" exitCode=0 Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.038132 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.038161 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.038176 4619 scope.go:117] "RemoveContainer" containerID="1c10eca96de3abc38af2c9c686eee98e2b56a0138fc48edb624175300fb0caff" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.042442 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" event={"ID":"ffa8eff3-988a-4fe5-93b9-371636a0ae8f","Type":"ContainerDied","Data":"979476f1d1a73a926dfe18573dea3239254d61ae0abb82c2d984e6487cfe702c"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.042475 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979476f1d1a73a926dfe18573dea3239254d61ae0abb82c2d984e6487cfe702c" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.042532 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9c25-account-create-update-sgdh4" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.054776 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g8k9m" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.055415 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g8k9m" event={"ID":"21d35b94-5e1e-4fd2-a2d7-40ca92101a54","Type":"ContainerDied","Data":"b457fec3ebf135250c9f47bdaecd31b392cc2cbf2e95d8de727713611ddfb381"} Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.055444 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b457fec3ebf135250c9f47bdaecd31b392cc2cbf2e95d8de727713611ddfb381" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.128211 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cqmc\" (UniqueName: \"kubernetes.io/projected/b1bb0260-95f5-41fd-b051-0f122151a9c0-kube-api-access-8cqmc\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.128242 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bb0260-95f5-41fd-b051-0f122151a9c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.128252 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55wdv\" (UniqueName: \"kubernetes.io/projected/cb30d86a-e144-4072-821a-f159e5dbdf31-kube-api-access-55wdv\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:45 crc kubenswrapper[4619]: I0126 11:14:45.128261 4619 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30d86a-e144-4072-821a-f159e5dbdf31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.078562 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.079571 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.078869 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerStarted","Data":"a6a96720d71fecfd8d60340b138fee63e5979b4d7048fef1269cac27ee40da0a"} Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.326448 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.326489 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.379491 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:46 crc kubenswrapper[4619]: I0126 11:14:46.380478 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 11:14:47.090125 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerStarted","Data":"7d5a632a25007bbaefde167cd02f9b3fc3a17e6f92b4a8a7adb09c3957779ad1"} Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 11:14:47.091779 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 
11:14:47.091818 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 11:14:47.454582 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 11:14:47.454706 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:14:47 crc kubenswrapper[4619]: I0126 11:14:47.461249 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.109291 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerStarted","Data":"272538ae2f4c5b0b78c9c6823d1b776f7e185b8f47aede45f841e1117583b9fc"} Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.109745 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.136524 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.60680103 podStartE2EDuration="5.1365064s" podCreationTimestamp="2026-01-26 11:14:43 +0000 UTC" firstStartedPulling="2026-01-26 11:14:43.980648944 +0000 UTC m=+1183.014689660" lastFinishedPulling="2026-01-26 11:14:47.510354314 +0000 UTC m=+1186.544395030" observedRunningTime="2026-01-26 11:14:48.132252582 +0000 UTC m=+1187.166293298" watchObservedRunningTime="2026-01-26 11:14:48.1365064 +0000 UTC m=+1187.170547116" Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.325724 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.354161 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:14:48 crc kubenswrapper[4619]: I0126 11:14:48.497275 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:14:49 crc kubenswrapper[4619]: I0126 11:14:49.116348 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:14:49 crc kubenswrapper[4619]: I0126 11:14:49.116378 4619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:14:49 crc kubenswrapper[4619]: I0126 11:14:49.858097 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.029797 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.128086 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-central-agent" containerID="cri-o://6b9c6110377facfbc99a64ca650d47c1c9664ef84bd9a6b414c061428258e2eb" gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.128195 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="proxy-httpd" containerID="cri-o://272538ae2f4c5b0b78c9c6823d1b776f7e185b8f47aede45f841e1117583b9fc" 
gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.128245 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="sg-core" containerID="cri-o://7d5a632a25007bbaefde167cd02f9b3fc3a17e6f92b4a8a7adb09c3957779ad1" gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.128279 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-notification-agent" containerID="cri-o://a6a96720d71fecfd8d60340b138fee63e5979b4d7048fef1269cac27ee40da0a" gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.383943 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-846d64d6c4-66jvl" Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.472842 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.473068 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon-log" containerID="cri-o://ada190b479ee52f8303b817e9c1c2701293e633d99dd5836167d714d09c747ba" gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.473185 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" containerID="cri-o://b56fd8a7dbb2c8b1978a088101e7acddc67948bd399d538c2472832c6cffbd25" gracePeriod=30 Jan 26 11:14:50 crc kubenswrapper[4619]: I0126 11:14:50.480468 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.452687 4619 generic.go:334] "Generic (PLEG): container finished" podID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerID="272538ae2f4c5b0b78c9c6823d1b776f7e185b8f47aede45f841e1117583b9fc" exitCode=0 Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.453169 4619 generic.go:334] "Generic (PLEG): container finished" podID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerID="7d5a632a25007bbaefde167cd02f9b3fc3a17e6f92b4a8a7adb09c3957779ad1" exitCode=2 Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.453203 4619 generic.go:334] "Generic (PLEG): container finished" podID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerID="a6a96720d71fecfd8d60340b138fee63e5979b4d7048fef1269cac27ee40da0a" exitCode=0 Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.463787 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerDied","Data":"272538ae2f4c5b0b78c9c6823d1b776f7e185b8f47aede45f841e1117583b9fc"} Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.463867 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerDied","Data":"7d5a632a25007bbaefde167cd02f9b3fc3a17e6f92b4a8a7adb09c3957779ad1"} Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.463883 4619 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerDied","Data":"a6a96720d71fecfd8d60340b138fee63e5979b4d7048fef1269cac27ee40da0a"} Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.512811 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjsfd"] Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.513988 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d35b94-5e1e-4fd2-a2d7-40ca92101a54" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.514238 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d35b94-5e1e-4fd2-a2d7-40ca92101a54" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.514318 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa8eff3-988a-4fe5-93b9-371636a0ae8f" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.514386 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa8eff3-988a-4fe5-93b9-371636a0ae8f" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.514457 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1bb0260-95f5-41fd-b051-0f122151a9c0" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.514508 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1bb0260-95f5-41fd-b051-0f122151a9c0" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.514609 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d77756-36ba-479e-8688-779283522d80" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.514696 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d77756-36ba-479e-8688-779283522d80" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.514803 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb30d86a-e144-4072-821a-f159e5dbdf31" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.514926 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb30d86a-e144-4072-821a-f159e5dbdf31" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: E0126 11:14:51.515051 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3168c764-1c23-47f7-ad80-20fe2f860ffd" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515126 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3168c764-1c23-47f7-ad80-20fe2f860ffd" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515554 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d77756-36ba-479e-8688-779283522d80" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515660 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb30d86a-e144-4072-821a-f159e5dbdf31" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515729 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1bb0260-95f5-41fd-b051-0f122151a9c0" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc 
kubenswrapper[4619]: I0126 11:14:51.515836 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d35b94-5e1e-4fd2-a2d7-40ca92101a54" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515904 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3168c764-1c23-47f7-ad80-20fe2f860ffd" containerName="mariadb-database-create" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.515973 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa8eff3-988a-4fe5-93b9-371636a0ae8f" containerName="mariadb-account-create-update" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.525610 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.531035 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.531277 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.531387 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-g5zfp" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.533027 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjsfd"] Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.692769 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.693172 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7s5\" (UniqueName: \"kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.693340 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.693444 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.795755 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " 
pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.795804 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.795831 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.795900 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7s5\" (UniqueName: \"kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.803469 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.804018 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.818077 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.823600 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7s5\" (UniqueName: \"kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5\") pod \"nova-cell0-conductor-db-sync-zjsfd\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:51 crc kubenswrapper[4619]: I0126 11:14:51.854496 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:14:52 crc kubenswrapper[4619]: I0126 11:14:52.347449 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjsfd"] Jan 26 11:14:52 crc kubenswrapper[4619]: W0126 11:14:52.349129 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5f7823e_9371_4a62_b554_9b30b3bb3483.slice/crio-593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f WatchSource:0}: Error finding container 593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f: Status 404 returned error can't find the container with id 593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f Jan 26 11:14:52 crc kubenswrapper[4619]: I0126 11:14:52.463170 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" event={"ID":"b5f7823e-9371-4a62-b554-9b30b3bb3483","Type":"ContainerStarted","Data":"593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f"} Jan 26 11:14:54 crc kubenswrapper[4619]: I0126 11:14:54.885758 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49114->10.217.0.151:8443: read: connection reset by peer" Jan 26 11:14:55 crc kubenswrapper[4619]: I0126 11:14:55.102963 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 11:14:55 crc kubenswrapper[4619]: I0126 11:14:55.485924 4619 generic.go:334] "Generic (PLEG): container finished" podID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerID="b56fd8a7dbb2c8b1978a088101e7acddc67948bd399d538c2472832c6cffbd25" exitCode=0 Jan 26 11:14:55 crc kubenswrapper[4619]: I0126 11:14:55.485967 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerDied","Data":"b56fd8a7dbb2c8b1978a088101e7acddc67948bd399d538c2472832c6cffbd25"} Jan 26 11:14:55 crc kubenswrapper[4619]: I0126 11:14:55.486005 4619 scope.go:117] "RemoveContainer" containerID="b0e2cbc3edeffa1b4639dbc8cde1e089d22e52bccc9a66bfa5d58fe57443d2fd" Jan 26 11:14:57 crc kubenswrapper[4619]: I0126 11:14:57.507673 4619 generic.go:334] "Generic (PLEG): container finished" podID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerID="6b9c6110377facfbc99a64ca650d47c1c9664ef84bd9a6b414c061428258e2eb" exitCode=0 Jan 26 11:14:57 crc kubenswrapper[4619]: I0126 11:14:57.507753 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerDied","Data":"6b9c6110377facfbc99a64ca650d47c1c9664ef84bd9a6b414c061428258e2eb"} Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.093979 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188007 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd"] Jan 26 11:15:00 crc kubenswrapper[4619]: E0126 11:15:00.188481 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="proxy-httpd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188499 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="proxy-httpd" Jan 26 11:15:00 crc kubenswrapper[4619]: E0126 11:15:00.188514 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-central-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188523 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-central-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: E0126 11:15:00.188536 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="sg-core" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188544 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="sg-core" Jan 26 11:15:00 crc kubenswrapper[4619]: E0126 11:15:00.188565 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-notification-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188573 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-notification-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188848 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-notification-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188864 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="ceilometer-central-agent" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188893 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="sg-core" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.188909 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" containerName="proxy-httpd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.189575 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.211625 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.211908 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.254783 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.254858 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.254957 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.255348 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.255602 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.255716 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nh9n\" (UniqueName: \"kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.255766 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.255803 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data\") pod \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\" (UID: \"15c51dad-c61f-4c0d-91e3-6d0054c521c4\") " Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.256990 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zrp\" (UniqueName: \"kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.257123 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.257163 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.257482 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.257664 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.257678 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15c51dad-c61f-4c0d-91e3-6d0054c521c4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.259961 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd"] Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.270682 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n" (OuterVolumeSpecName: "kube-api-access-6nh9n") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "kube-api-access-6nh9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.277275 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts" (OuterVolumeSpecName: "scripts") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.288433 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.357769 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: "15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359280 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zrp\" (UniqueName: \"kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359390 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359570 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359652 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359669 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359682 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nh9n\" (UniqueName: \"kubernetes.io/projected/15c51dad-c61f-4c0d-91e3-6d0054c521c4-kube-api-access-6nh9n\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.359694 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.360591 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.363467 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.367748 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data" (OuterVolumeSpecName: "config-data") pod "15c51dad-c61f-4c0d-91e3-6d0054c521c4" (UID: 
"15c51dad-c61f-4c0d-91e3-6d0054c521c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.379084 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zrp\" (UniqueName: \"kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp\") pod \"collect-profiles-29490435-dgpsd\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.461469 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c51dad-c61f-4c0d-91e3-6d0054c521c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.534538 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15c51dad-c61f-4c0d-91e3-6d0054c521c4","Type":"ContainerDied","Data":"a84d8b594dfb456491faf1a233f5638d89133da5e21d8033e5847eed7dc62b78"} Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.534583 4619 scope.go:117] "RemoveContainer" containerID="272538ae2f4c5b0b78c9c6823d1b776f7e185b8f47aede45f841e1117583b9fc" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.534722 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.540862 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" event={"ID":"b5f7823e-9371-4a62-b554-9b30b3bb3483","Type":"ContainerStarted","Data":"bd886717fa35c8e8478a4f3b20f429dfeab42c99c0bf638b7d19f0f79b3a8c4b"} Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.550296 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.562828 4619 scope.go:117] "RemoveContainer" containerID="7d5a632a25007bbaefde167cd02f9b3fc3a17e6f92b4a8a7adb09c3957779ad1" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.578088 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" podStartSLOduration=1.9690487 podStartE2EDuration="9.578070366s" podCreationTimestamp="2026-01-26 11:14:51 +0000 UTC" firstStartedPulling="2026-01-26 11:14:52.352421519 +0000 UTC m=+1191.386462255" lastFinishedPulling="2026-01-26 11:14:59.961443205 +0000 UTC m=+1198.995483921" observedRunningTime="2026-01-26 11:15:00.559352478 +0000 UTC m=+1199.593393184" watchObservedRunningTime="2026-01-26 11:15:00.578070366 +0000 UTC m=+1199.612111082" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.605184 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.618662 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.619493 4619 scope.go:117] "RemoveContainer" containerID="a6a96720d71fecfd8d60340b138fee63e5979b4d7048fef1269cac27ee40da0a" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.640712 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.642917 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.647237 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.670394 4619 scope.go:117] "RemoveContainer" containerID="6b9c6110377facfbc99a64ca650d47c1c9664ef84bd9a6b414c061428258e2eb" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675378 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675398 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675413 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675443 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlb7\" (UniqueName: \"kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675484 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675522 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675561 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675569 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.675627 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779167 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779445 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779477 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnlb7\" (UniqueName: \"kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779525 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779578 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779635 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.779674 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.780293 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.783532 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.789163 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.796165 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.824018 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnlb7\" (UniqueName: \"kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.836103 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:00 crc kubenswrapper[4619]: I0126 11:15:00.845941 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " pod="openstack/ceilometer-0" Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.016391 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.199814 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd"] Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.276778 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c51dad-c61f-4c0d-91e3-6d0054c521c4" path="/var/lib/kubelet/pods/15c51dad-c61f-4c0d-91e3-6d0054c521c4/volumes" Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.492117 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.551045 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerStarted","Data":"128eda70000d11564e95cac7bfe036e0e6cd04b4ba44cff338c6204caf69c0bf"} Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.553791 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" event={"ID":"0f339338-c587-4b52-98f1-44b46fab9b40","Type":"ContainerStarted","Data":"ada707d0a1cfec1326004e3932d9ede1e233ce7e2e505b9477b1c8a72bebef90"} Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.553863 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" event={"ID":"0f339338-c587-4b52-98f1-44b46fab9b40","Type":"ContainerStarted","Data":"6847d1808158b9c2434b087817e0708ae749ef081dd7b93c34d48f5ec65848be"} Jan 26 11:15:01 crc kubenswrapper[4619]: I0126 11:15:01.575860 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" podStartSLOduration=1.575842277 podStartE2EDuration="1.575842277s" podCreationTimestamp="2026-01-26 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:01.575218069 +0000 UTC m=+1200.609258785" watchObservedRunningTime="2026-01-26 11:15:01.575842277 
+0000 UTC m=+1200.609882993" Jan 26 11:15:02 crc kubenswrapper[4619]: I0126 11:15:02.581575 4619 generic.go:334] "Generic (PLEG): container finished" podID="0f339338-c587-4b52-98f1-44b46fab9b40" containerID="ada707d0a1cfec1326004e3932d9ede1e233ce7e2e505b9477b1c8a72bebef90" exitCode=0 Jan 26 11:15:02 crc kubenswrapper[4619]: I0126 11:15:02.581661 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" event={"ID":"0f339338-c587-4b52-98f1-44b46fab9b40","Type":"ContainerDied","Data":"ada707d0a1cfec1326004e3932d9ede1e233ce7e2e505b9477b1c8a72bebef90"} Jan 26 11:15:02 crc kubenswrapper[4619]: I0126 11:15:02.768623 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:03 crc kubenswrapper[4619]: I0126 11:15:03.594335 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerStarted","Data":"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb"} Jan 26 11:15:03 crc kubenswrapper[4619]: I0126 11:15:03.594863 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerStarted","Data":"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2"} Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.003642 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.074522 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume\") pod \"0f339338-c587-4b52-98f1-44b46fab9b40\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.074583 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume\") pod \"0f339338-c587-4b52-98f1-44b46fab9b40\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.074654 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7zrp\" (UniqueName: \"kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp\") pod \"0f339338-c587-4b52-98f1-44b46fab9b40\" (UID: \"0f339338-c587-4b52-98f1-44b46fab9b40\") " Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.076639 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f339338-c587-4b52-98f1-44b46fab9b40" (UID: "0f339338-c587-4b52-98f1-44b46fab9b40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.092515 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f339338-c587-4b52-98f1-44b46fab9b40" (UID: "0f339338-c587-4b52-98f1-44b46fab9b40"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.092593 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp" (OuterVolumeSpecName: "kube-api-access-z7zrp") pod "0f339338-c587-4b52-98f1-44b46fab9b40" (UID: "0f339338-c587-4b52-98f1-44b46fab9b40"). InnerVolumeSpecName "kube-api-access-z7zrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.177770 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7zrp\" (UniqueName: \"kubernetes.io/projected/0f339338-c587-4b52-98f1-44b46fab9b40-kube-api-access-z7zrp\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.177800 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f339338-c587-4b52-98f1-44b46fab9b40-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.177809 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f339338-c587-4b52-98f1-44b46fab9b40-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.613371 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.614053 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd" event={"ID":"0f339338-c587-4b52-98f1-44b46fab9b40","Type":"ContainerDied","Data":"6847d1808158b9c2434b087817e0708ae749ef081dd7b93c34d48f5ec65848be"} Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.615606 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6847d1808158b9c2434b087817e0708ae749ef081dd7b93c34d48f5ec65848be" Jan 26 11:15:04 crc kubenswrapper[4619]: I0126 11:15:04.617765 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerStarted","Data":"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a"} Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.103511 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f67c775d4-7ls4r" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.630742 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerStarted","Data":"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e"} Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.631206 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.631098 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="proxy-httpd" 
containerID="cri-o://426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e" gracePeriod=30 Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.630971 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-central-agent" containerID="cri-o://8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2" gracePeriod=30 Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.631148 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-notification-agent" containerID="cri-o://2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb" gracePeriod=30 Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.631159 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="sg-core" containerID="cri-o://a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a" gracePeriod=30 Jan 26 11:15:05 crc kubenswrapper[4619]: I0126 11:15:05.661075 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.785243557 podStartE2EDuration="5.661060065s" podCreationTimestamp="2026-01-26 11:15:00 +0000 UTC" firstStartedPulling="2026-01-26 11:15:01.497924258 +0000 UTC m=+1200.531964974" lastFinishedPulling="2026-01-26 11:15:05.373740766 +0000 UTC m=+1204.407781482" observedRunningTime="2026-01-26 11:15:05.658129654 +0000 UTC m=+1204.692170370" watchObservedRunningTime="2026-01-26 11:15:05.661060065 +0000 UTC m=+1204.695100781" Jan 26 11:15:06 crc kubenswrapper[4619]: I0126 11:15:06.639911 4619 generic.go:334] "Generic (PLEG): container finished" podID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerID="a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a" exitCode=2 Jan 26 11:15:06 crc kubenswrapper[4619]: I0126 11:15:06.640297 4619 generic.go:334] "Generic (PLEG): container finished" podID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerID="2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb" exitCode=0 Jan 26 11:15:06 crc kubenswrapper[4619]: I0126 11:15:06.640332 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerDied","Data":"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a"} Jan 26 11:15:06 crc kubenswrapper[4619]: I0126 11:15:06.640374 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerDied","Data":"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb"} Jan 26 11:15:11 crc kubenswrapper[4619]: I0126 11:15:11.714789 4619 generic.go:334] "Generic (PLEG): container finished" podID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerID="8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2" exitCode=0 Jan 26 11:15:11 crc kubenswrapper[4619]: I0126 11:15:11.714850 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerDied","Data":"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2"} Jan 26 11:15:12 crc kubenswrapper[4619]: I0126 11:15:12.724238 4619 generic.go:334] "Generic (PLEG): container finished" 
podID="b5f7823e-9371-4a62-b554-9b30b3bb3483" containerID="bd886717fa35c8e8478a4f3b20f429dfeab42c99c0bf638b7d19f0f79b3a8c4b" exitCode=0 Jan 26 11:15:12 crc kubenswrapper[4619]: I0126 11:15:12.724284 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" event={"ID":"b5f7823e-9371-4a62-b554-9b30b3bb3483","Type":"ContainerDied","Data":"bd886717fa35c8e8478a4f3b20f429dfeab42c99c0bf638b7d19f0f79b3a8c4b"} Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.122768 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.293699 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt7s5\" (UniqueName: \"kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5\") pod \"b5f7823e-9371-4a62-b554-9b30b3bb3483\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.293760 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle\") pod \"b5f7823e-9371-4a62-b554-9b30b3bb3483\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.293828 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data\") pod \"b5f7823e-9371-4a62-b554-9b30b3bb3483\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.293879 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts\") pod \"b5f7823e-9371-4a62-b554-9b30b3bb3483\" (UID: \"b5f7823e-9371-4a62-b554-9b30b3bb3483\") " Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.299758 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts" (OuterVolumeSpecName: "scripts") pod "b5f7823e-9371-4a62-b554-9b30b3bb3483" (UID: "b5f7823e-9371-4a62-b554-9b30b3bb3483"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.299766 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5" (OuterVolumeSpecName: "kube-api-access-pt7s5") pod "b5f7823e-9371-4a62-b554-9b30b3bb3483" (UID: "b5f7823e-9371-4a62-b554-9b30b3bb3483"). InnerVolumeSpecName "kube-api-access-pt7s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.321882 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5f7823e-9371-4a62-b554-9b30b3bb3483" (UID: "b5f7823e-9371-4a62-b554-9b30b3bb3483"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.322305 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data" (OuterVolumeSpecName: "config-data") pod "b5f7823e-9371-4a62-b554-9b30b3bb3483" (UID: "b5f7823e-9371-4a62-b554-9b30b3bb3483"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.396198 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.396226 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.396238 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt7s5\" (UniqueName: \"kubernetes.io/projected/b5f7823e-9371-4a62-b554-9b30b3bb3483-kube-api-access-pt7s5\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.396248 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f7823e-9371-4a62-b554-9b30b3bb3483-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.744800 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" event={"ID":"b5f7823e-9371-4a62-b554-9b30b3bb3483","Type":"ContainerDied","Data":"593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f"} Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.745116 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="593bc229c06471823ad4e68bb175f1625a6de92ff97104d9c4347dc97a50550f" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.744895 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjsfd" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.882567 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:15:14 crc kubenswrapper[4619]: E0126 11:15:14.882918 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f339338-c587-4b52-98f1-44b46fab9b40" containerName="collect-profiles" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.882934 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f339338-c587-4b52-98f1-44b46fab9b40" containerName="collect-profiles" Jan 26 11:15:14 crc kubenswrapper[4619]: E0126 11:15:14.882964 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f7823e-9371-4a62-b554-9b30b3bb3483" containerName="nova-cell0-conductor-db-sync" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.882971 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f7823e-9371-4a62-b554-9b30b3bb3483" containerName="nova-cell0-conductor-db-sync" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.883140 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f339338-c587-4b52-98f1-44b46fab9b40" containerName="collect-profiles" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.883161 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f7823e-9371-4a62-b554-9b30b3bb3483" containerName="nova-cell0-conductor-db-sync" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.884208 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.889023 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-g5zfp" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.893132 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 11:15:14 crc kubenswrapper[4619]: I0126 11:15:14.925158 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.015331 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.016021 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/50392ffb-8c95-4c47-97e9-03d27141e8e8-kube-api-access-cqdpb\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.016241 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.104711 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f67c775d4-7ls4r" 
podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.117934 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.117984 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/50392ffb-8c95-4c47-97e9-03d27141e8e8-kube-api-access-cqdpb\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.118011 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.125586 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.132423 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50392ffb-8c95-4c47-97e9-03d27141e8e8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.136954 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/50392ffb-8c95-4c47-97e9-03d27141e8e8-kube-api-access-cqdpb\") pod \"nova-cell0-conductor-0\" (UID: \"50392ffb-8c95-4c47-97e9-03d27141e8e8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.235416 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.711253 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:15:15 crc kubenswrapper[4619]: I0126 11:15:15.759714 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50392ffb-8c95-4c47-97e9-03d27141e8e8","Type":"ContainerStarted","Data":"976a4f91c3e8ca70a3fe3c36c9b3f73f6d93919c6b606d86e74842fd483c96d7"} Jan 26 11:15:16 crc kubenswrapper[4619]: I0126 11:15:16.789196 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50392ffb-8c95-4c47-97e9-03d27141e8e8","Type":"ContainerStarted","Data":"8cabf5c705f035b189d4cba4911afe5fe1bb2e76762d9728db20b82d27ae5e61"} Jan 26 11:15:16 crc kubenswrapper[4619]: I0126 11:15:16.790159 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:16 crc kubenswrapper[4619]: I0126 11:15:16.819796 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.819769354 podStartE2EDuration="2.819769354s" podCreationTimestamp="2026-01-26 11:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:16.812083081 +0000 UTC m=+1215.846123807" watchObservedRunningTime="2026-01-26 11:15:16.819769354 +0000 UTC m=+1215.853810110" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.269635 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.762929 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s6glk"] Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.764297 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.767440 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.767731 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.786314 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s6glk"] Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.896899 4619 generic.go:334] "Generic (PLEG): container finished" podID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerID="ada190b479ee52f8303b817e9c1c2701293e633d99dd5836167d714d09c747ba" exitCode=137 Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.896947 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerDied","Data":"ada190b479ee52f8303b817e9c1c2701293e633d99dd5836167d714d09c747ba"} Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.896974 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f67c775d4-7ls4r" event={"ID":"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d","Type":"ContainerDied","Data":"171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708"} Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.896987 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171560876c1d8f44e444fd18c5acf7dfab2a0b5de3201237ebc1eab5f610b708" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.947273 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.974712 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:20 crc kubenswrapper[4619]: E0126 11:15:20.975083 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975095 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: E0126 11:15:20.975109 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon-log" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975114 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon-log" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975288 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975307 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon-log" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975314 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: E0126 11:15:20.975487 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.975495 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" containerName="horizon" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.981207 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.981279 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5785\" (UniqueName: \"kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.981309 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.981333 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " 
pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.982027 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:20 crc kubenswrapper[4619]: I0126 11:15:20.986593 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.007129 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.070640 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.072040 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.079411 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083337 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083431 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083570 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083594 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083650 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-668ht\" (UniqueName: \"kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083725 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.083747 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle\") pod \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\" (UID: \"670c0ff7-8d41-4dc2-81d7-b64d24b11d3d\") " Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084003 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084032 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084074 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5785\" (UniqueName: \"kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084115 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084135 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.084126 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs" (OuterVolumeSpecName: "logs") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.093522 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ms9s\" (UniqueName: \"kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.093681 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.094015 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.094187 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.100604 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.101259 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.106932 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.107698 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.107858 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht" (OuterVolumeSpecName: "kube-api-access-668ht") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "kube-api-access-668ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.109895 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.130000 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.146602 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.157001 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.158363 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5785\" (UniqueName: \"kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785\") pod \"nova-cell0-cell-mapping-s6glk\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.193917 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.194471 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts" (OuterVolumeSpecName: "scripts") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.195964 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.195999 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196074 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196113 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196571 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ms9s\" (UniqueName: \"kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196640 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196672 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zdd\" (UniqueName: \"kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196694 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196719 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196748 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f4r95\" (UniqueName: \"kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196767 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196839 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196849 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196860 4619 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.196870 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-668ht\" (UniqueName: \"kubernetes.io/projected/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-kube-api-access-668ht\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.197389 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.199720 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.211333 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.227237 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.230233 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data" (OuterVolumeSpecName: "config-data") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.235554 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.295254 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ms9s\" (UniqueName: \"kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s\") pod \"nova-api-0\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310283 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310359 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zdd\" (UniqueName: \"kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310401 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310443 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4r95\" (UniqueName: \"kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310469 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310541 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310783 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.310908 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.317405 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " 
pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.319149 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.333209 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.333376 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.353223 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4r95\" (UniqueName: \"kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.362533 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.363858 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.364965 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.367940 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.374772 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data\") pod \"nova-metadata-0\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.399193 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zdd\" (UniqueName: \"kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd\") pod \"nova-scheduler-0\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.399665 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" (UID: "670c0ff7-8d41-4dc2-81d7-b64d24b11d3d"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.404467 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.424811 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.424955 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29q8\" (UniqueName: \"kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.425065 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.425334 4619 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.425385 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.475362 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.483247 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.509933 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532080 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532123 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532159 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29q8\" (UniqueName: \"kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532181 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532210 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnm27\" (UniqueName: \"kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532236 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532262 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532331 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.532361 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.540345 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.545770 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.559041 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.602181 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29q8\" (UniqueName: \"kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8\") pod \"nova-cell1-novncproxy-0\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.635504 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.635582 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.635606 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.635649 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.635683 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnm27\" (UniqueName: \"kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 
11:15:21.635721 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.636539 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.637065 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.637639 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.638130 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.638722 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.685911 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnm27\" (UniqueName: \"kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27\") pod \"dnsmasq-dns-757b4f8459-zl7zs\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.732114 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.841041 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.924439 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f67c775d4-7ls4r" Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.974685 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:15:21 crc kubenswrapper[4619]: I0126 11:15:21.988287 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f67c775d4-7ls4r"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.165939 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.208411 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.228079 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s6glk"] Jan 26 11:15:22 crc kubenswrapper[4619]: W0126 11:15:22.436358 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf44896e9_4021_4f5d_90be_c7b4680bb230.slice/crio-3e50d2d98f33590d3afc78c4c0c2081c1462eae857147e17e4e45781b0ab246e WatchSource:0}: Error finding container 3e50d2d98f33590d3afc78c4c0c2081c1462eae857147e17e4e45781b0ab246e: Status 404 returned error can't find the container with id 3e50d2d98f33590d3afc78c4c0c2081c1462eae857147e17e4e45781b0ab246e Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.438265 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.594042 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.630561 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.751730 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x6sj5"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.753040 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.760683 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.760907 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.763596 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x6sj5"] Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.863891 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.863984 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7lf\" (UniqueName: \"kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.864139 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.864212 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.935797 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"278e84bc-0704-4e94-af43-82b629145b59","Type":"ContainerStarted","Data":"16386aac70b08decbd0f9b131d78c6ae36c83f20e7e2c7c4bcd4603fb50812cc"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.938194 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerStarted","Data":"3e50d2d98f33590d3afc78c4c0c2081c1462eae857147e17e4e45781b0ab246e"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.943482 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88b8c21e-24e1-4aeb-9da6-629bf3336e35","Type":"ContainerStarted","Data":"1a364cc7bf95f23830e95e58a59919700cc7d9ee23eb4f9467cda41f46ccc740"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.948222 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s6glk" event={"ID":"6460284b-1cb6-444b-a2f7-676f38e03a78","Type":"ContainerStarted","Data":"570fc1861584d28033a84f8f3a28cfc3146ab376a6a71722dc1634536d4ecd74"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 
11:15:22.948254 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s6glk" event={"ID":"6460284b-1cb6-444b-a2f7-676f38e03a78","Type":"ContainerStarted","Data":"c2f689db8a6037f06a457543d608db0e4e906aeaae33154865a39b770f6a3c6c"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.961670 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerStarted","Data":"94e021bcfce5e8b7bdd55d12aae8801a3e4deaf52d6b8502f69ff19ec5cec39b"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.966571 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.966660 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7lf\" (UniqueName: \"kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.966691 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.966716 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.978487 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.978836 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerStarted","Data":"4e92dea6eb8b6739e7022ed5647ad9870f06a3ebe4b3280e2d612b1d759ba939"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.978871 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerStarted","Data":"662a072a4a27c834b45e6af78fe0f70bbf8c9471d747cafde33a3930e805bca9"} Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.985143 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " 
pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.986430 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:22 crc kubenswrapper[4619]: I0126 11:15:22.987326 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s6glk" podStartSLOduration=2.987307297 podStartE2EDuration="2.987307297s" podCreationTimestamp="2026-01-26 11:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:22.972164337 +0000 UTC m=+1222.006205053" watchObservedRunningTime="2026-01-26 11:15:22.987307297 +0000 UTC m=+1222.021348013" Jan 26 11:15:23 crc kubenswrapper[4619]: I0126 11:15:23.001500 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7lf\" (UniqueName: \"kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf\") pod \"nova-cell1-conductor-db-sync-x6sj5\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:23 crc kubenswrapper[4619]: I0126 11:15:23.110404 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:23 crc kubenswrapper[4619]: I0126 11:15:23.273971 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670c0ff7-8d41-4dc2-81d7-b64d24b11d3d" path="/var/lib/kubelet/pods/670c0ff7-8d41-4dc2-81d7-b64d24b11d3d/volumes" Jan 26 11:15:23 crc kubenswrapper[4619]: I0126 11:15:23.738639 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x6sj5"] Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.009317 4619 generic.go:334] "Generic (PLEG): container finished" podID="6260663d-37a2-4439-9db3-d1b74606db09" containerID="4e92dea6eb8b6739e7022ed5647ad9870f06a3ebe4b3280e2d612b1d759ba939" exitCode=0 Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.009381 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerDied","Data":"4e92dea6eb8b6739e7022ed5647ad9870f06a3ebe4b3280e2d612b1d759ba939"} Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.010121 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.010200 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerStarted","Data":"d7c41d08f0808fe850195c144ddbd6902c46168c9cb71400eee4c7b842c77240"} Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.014531 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" event={"ID":"8d93c669-300a-4954-b044-df49960ba3f0","Type":"ContainerStarted","Data":"1c933ad62b0d3780c4979dd78535cfe6b5e5496046ed596e9fe9794a38a8e0c5"} Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.036118 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" podStartSLOduration=3.03609579 podStartE2EDuration="3.03609579s" podCreationTimestamp="2026-01-26 11:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:24.026269148 +0000 UTC m=+1223.060309864" watchObservedRunningTime="2026-01-26 11:15:24.03609579 +0000 UTC m=+1223.070136506" Jan 26 11:15:24 crc kubenswrapper[4619]: I0126 11:15:24.985586 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:25 crc kubenswrapper[4619]: I0126 11:15:25.004021 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:15:25 crc kubenswrapper[4619]: I0126 11:15:25.029473 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" event={"ID":"8d93c669-300a-4954-b044-df49960ba3f0","Type":"ContainerStarted","Data":"a5be8eb5920d8eb878a8fb03dc681fd467edd376b013b6859ce64b040dad4b0d"} Jan 26 11:15:25 crc kubenswrapper[4619]: I0126 11:15:25.047873 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" podStartSLOduration=3.047855718 podStartE2EDuration="3.047855718s" podCreationTimestamp="2026-01-26 11:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:25.044759632 +0000 UTC m=+1224.078800348" watchObservedRunningTime="2026-01-26 11:15:25.047855718 +0000 UTC m=+1224.081896434" Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.048160 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88b8c21e-24e1-4aeb-9da6-629bf3336e35","Type":"ContainerStarted","Data":"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c"} Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.050687 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerStarted","Data":"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d"} Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.050713 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerStarted","Data":"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf"} Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.052655 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"278e84bc-0704-4e94-af43-82b629145b59","Type":"ContainerStarted","Data":"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"} Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.052709 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="278e84bc-0704-4e94-af43-82b629145b59" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939" gracePeriod=30 Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.055476 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerStarted","Data":"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321"} Jan 26 11:15:27 crc 
kubenswrapper[4619]: I0126 11:15:27.055505 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerStarted","Data":"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0"} Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.055580 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-log" containerID="cri-o://168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" gracePeriod=30 Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.055680 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-metadata" containerID="cri-o://683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" gracePeriod=30 Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.070254 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.05024346 podStartE2EDuration="6.070236692s" podCreationTimestamp="2026-01-26 11:15:21 +0000 UTC" firstStartedPulling="2026-01-26 11:15:22.148271944 +0000 UTC m=+1221.182312660" lastFinishedPulling="2026-01-26 11:15:26.168265166 +0000 UTC m=+1225.202305892" observedRunningTime="2026-01-26 11:15:27.061569992 +0000 UTC m=+1226.095610708" watchObservedRunningTime="2026-01-26 11:15:27.070236692 +0000 UTC m=+1226.104277398" Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.103406 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.364919647 podStartE2EDuration="6.10338532s" podCreationTimestamp="2026-01-26 11:15:21 +0000 UTC" firstStartedPulling="2026-01-26 11:15:22.450601929 +0000 UTC m=+1221.484642645" lastFinishedPulling="2026-01-26 11:15:26.189067592 +0000 UTC m=+1225.223108318" observedRunningTime="2026-01-26 11:15:27.088882588 +0000 UTC m=+1226.122923304" watchObservedRunningTime="2026-01-26 11:15:27.10338532 +0000 UTC m=+1226.137426036" Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.139528 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.567439437 podStartE2EDuration="6.13950695s" podCreationTimestamp="2026-01-26 11:15:21 +0000 UTC" firstStartedPulling="2026-01-26 11:15:22.597756456 +0000 UTC m=+1221.631797172" lastFinishedPulling="2026-01-26 11:15:26.169823979 +0000 UTC m=+1225.203864685" observedRunningTime="2026-01-26 11:15:27.127659582 +0000 UTC m=+1226.161700298" watchObservedRunningTime="2026-01-26 11:15:27.13950695 +0000 UTC m=+1226.173547676" Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.149556 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.196284876 podStartE2EDuration="7.149535109s" podCreationTimestamp="2026-01-26 11:15:20 +0000 UTC" firstStartedPulling="2026-01-26 11:15:22.215608579 +0000 UTC m=+1221.249649295" lastFinishedPulling="2026-01-26 11:15:26.168858802 +0000 UTC m=+1225.202899528" observedRunningTime="2026-01-26 11:15:27.146246167 +0000 UTC m=+1226.180286883" watchObservedRunningTime="2026-01-26 11:15:27.149535109 +0000 UTC m=+1226.183575825" Jan 26 11:15:27 crc kubenswrapper[4619]: I0126 11:15:27.998989 4619 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.069951 4619 generic.go:334] "Generic (PLEG): container finished" podID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerID="683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" exitCode=0 Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.070010 4619 generic.go:334] "Generic (PLEG): container finished" podID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerID="168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" exitCode=143 Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.071291 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.071678 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerDied","Data":"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321"} Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.071748 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerDied","Data":"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0"} Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.071765 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f44896e9-4021-4f5d-90be-c7b4680bb230","Type":"ContainerDied","Data":"3e50d2d98f33590d3afc78c4c0c2081c1462eae857147e17e4e45781b0ab246e"} Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.071791 4619 scope.go:117] "RemoveContainer" containerID="683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.088418 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs\") pod \"f44896e9-4021-4f5d-90be-c7b4680bb230\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.088565 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4r95\" (UniqueName: \"kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95\") pod \"f44896e9-4021-4f5d-90be-c7b4680bb230\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.088604 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data\") pod \"f44896e9-4021-4f5d-90be-c7b4680bb230\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.088755 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle\") pod \"f44896e9-4021-4f5d-90be-c7b4680bb230\" (UID: \"f44896e9-4021-4f5d-90be-c7b4680bb230\") " Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.090428 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs" (OuterVolumeSpecName: "logs") pod "f44896e9-4021-4f5d-90be-c7b4680bb230" (UID: 
"f44896e9-4021-4f5d-90be-c7b4680bb230"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.094237 4619 scope.go:117] "RemoveContainer" containerID="168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.107832 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95" (OuterVolumeSpecName: "kube-api-access-f4r95") pod "f44896e9-4021-4f5d-90be-c7b4680bb230" (UID: "f44896e9-4021-4f5d-90be-c7b4680bb230"). InnerVolumeSpecName "kube-api-access-f4r95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.122471 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data" (OuterVolumeSpecName: "config-data") pod "f44896e9-4021-4f5d-90be-c7b4680bb230" (UID: "f44896e9-4021-4f5d-90be-c7b4680bb230"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.135888 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f44896e9-4021-4f5d-90be-c7b4680bb230" (UID: "f44896e9-4021-4f5d-90be-c7b4680bb230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.153016 4619 scope.go:117] "RemoveContainer" containerID="683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" Jan 26 11:15:28 crc kubenswrapper[4619]: E0126 11:15:28.153411 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321\": container with ID starting with 683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321 not found: ID does not exist" containerID="683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.153440 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321"} err="failed to get container status \"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321\": rpc error: code = NotFound desc = could not find container \"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321\": container with ID starting with 683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321 not found: ID does not exist" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.153459 4619 scope.go:117] "RemoveContainer" containerID="168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" Jan 26 11:15:28 crc kubenswrapper[4619]: E0126 11:15:28.153815 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0\": container with ID starting with 168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0 not found: ID does not exist" containerID="168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" Jan 26 11:15:28 crc 
kubenswrapper[4619]: I0126 11:15:28.153834 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0"} err="failed to get container status \"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0\": rpc error: code = NotFound desc = could not find container \"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0\": container with ID starting with 168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0 not found: ID does not exist" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.153846 4619 scope.go:117] "RemoveContainer" containerID="683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.154372 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321"} err="failed to get container status \"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321\": rpc error: code = NotFound desc = could not find container \"683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321\": container with ID starting with 683abfba3c2aacdf763304c5d32300e900ba7bde55f3e18d2488e28547742321 not found: ID does not exist" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.154391 4619 scope.go:117] "RemoveContainer" containerID="168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.154563 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0"} err="failed to get container status \"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0\": rpc error: code = NotFound desc = could not find container \"168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0\": container with ID starting with 168091dd0439ec9d98cfa20209ef6344168cbf36ff854d7c01816d75494a5aa0 not found: ID does not exist" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.220189 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.220226 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44896e9-4021-4f5d-90be-c7b4680bb230-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.220237 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4r95\" (UniqueName: \"kubernetes.io/projected/f44896e9-4021-4f5d-90be-c7b4680bb230-kube-api-access-f4r95\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.220248 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44896e9-4021-4f5d-90be-c7b4680bb230-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.406197 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.418711 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.432810 4619 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:28 crc kubenswrapper[4619]: E0126 11:15:28.433254 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-metadata" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.433276 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-metadata" Jan 26 11:15:28 crc kubenswrapper[4619]: E0126 11:15:28.433319 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-log" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.433327 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-log" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.433540 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-metadata" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.433559 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" containerName="nova-metadata-log" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.435188 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.438262 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.438673 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.448943 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.526123 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrqk\" (UniqueName: \"kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.526450 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.526479 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.526497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 
11:15:28.526547 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.628409 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrqk\" (UniqueName: \"kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.628691 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.629565 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.629690 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.629887 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.630320 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.641097 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.644449 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.646932 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrqk\" (UniqueName: \"kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " 
pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.647532 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " pod="openstack/nova-metadata-0" Jan 26 11:15:28 crc kubenswrapper[4619]: I0126 11:15:28.769033 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:29 crc kubenswrapper[4619]: I0126 11:15:29.239031 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:29 crc kubenswrapper[4619]: I0126 11:15:29.270823 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44896e9-4021-4f5d-90be-c7b4680bb230" path="/var/lib/kubelet/pods/f44896e9-4021-4f5d-90be-c7b4680bb230/volumes" Jan 26 11:15:30 crc kubenswrapper[4619]: I0126 11:15:30.091039 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerStarted","Data":"6d397f7feaa7bbcba4058926eae83063df8a3f445d7a44d834c5f3c2cf6ac468"} Jan 26 11:15:30 crc kubenswrapper[4619]: I0126 11:15:30.091550 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerStarted","Data":"67b1887be548fd70dd204d0a2ada4ca8cbb170056d348c7236c3bcf4126be088"} Jan 26 11:15:30 crc kubenswrapper[4619]: I0126 11:15:30.091571 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerStarted","Data":"14530117363e93ae40c357604f64cac8a5ad3d8dd7cfe71cbc6d888c3b112b26"} Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.023479 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.319800 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.319853 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.560488 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.560536 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.600500 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.627842 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.627818016 podStartE2EDuration="3.627818016s" podCreationTimestamp="2026-01-26 11:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:30.150316907 +0000 UTC m=+1229.184357643" watchObservedRunningTime="2026-01-26 11:15:31.627818016 +0000 UTC 
m=+1230.661858742" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.733407 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.843184 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.914835 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:15:31 crc kubenswrapper[4619]: I0126 11:15:31.915082 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="dnsmasq-dns" containerID="cri-o://28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7" gracePeriod=10 Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.130149 4619 generic.go:334] "Generic (PLEG): container finished" podID="6460284b-1cb6-444b-a2f7-676f38e03a78" containerID="570fc1861584d28033a84f8f3a28cfc3146ab376a6a71722dc1634536d4ecd74" exitCode=0 Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.130408 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s6glk" event={"ID":"6460284b-1cb6-444b-a2f7-676f38e03a78","Type":"ContainerDied","Data":"570fc1861584d28033a84f8f3a28cfc3146ab376a6a71722dc1634536d4ecd74"} Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.144220 4619 generic.go:334] "Generic (PLEG): container finished" podID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerID="28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7" exitCode=0 Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.144310 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" event={"ID":"dae431ee-6510-4c51-b099-96092c0b5b6f","Type":"ContainerDied","Data":"28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7"} Jan 26 11:15:32 crc kubenswrapper[4619]: E0126 11:15:32.151078 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae431ee_6510_4c51_b099_96092c0b5b6f.slice/crio-conmon-28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.164536 4619 generic.go:334] "Generic (PLEG): container finished" podID="8d93c669-300a-4954-b044-df49960ba3f0" containerID="a5be8eb5920d8eb878a8fb03dc681fd467edd376b013b6859ce64b040dad4b0d" exitCode=0 Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.164646 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" event={"ID":"8d93c669-300a-4954-b044-df49960ba3f0","Type":"ContainerDied","Data":"a5be8eb5920d8eb878a8fb03dc681fd467edd376b013b6859ce64b040dad4b0d"} Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.211182 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.410985 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.411263 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.487656 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641362 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641398 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641499 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641535 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641669 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.641725 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhp6t\" (UniqueName: \"kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t\") pod \"dae431ee-6510-4c51-b099-96092c0b5b6f\" (UID: \"dae431ee-6510-4c51-b099-96092c0b5b6f\") " Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.667829 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t" (OuterVolumeSpecName: "kube-api-access-xhp6t") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "kube-api-access-xhp6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.704604 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config" (OuterVolumeSpecName: "config") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.728181 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.740586 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.744400 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.744437 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.744450 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.744462 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhp6t\" (UniqueName: \"kubernetes.io/projected/dae431ee-6510-4c51-b099-96092c0b5b6f-kube-api-access-xhp6t\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.757469 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.770230 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dae431ee-6510-4c51-b099-96092c0b5b6f" (UID: "dae431ee-6510-4c51-b099-96092c0b5b6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.845876 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:32 crc kubenswrapper[4619]: I0126 11:15:32.845915 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae431ee-6510-4c51-b099-96092c0b5b6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.174993 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.185007 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wkmcr" event={"ID":"dae431ee-6510-4c51-b099-96092c0b5b6f","Type":"ContainerDied","Data":"7a5e8885f21b7a3e89308cdf633599c9549a771ca8f748e5c2b019d7b7637789"} Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.185095 4619 scope.go:117] "RemoveContainer" containerID="28d76bd31ea7c1fdcd5db94005da64bf4f1e46c963c56d035ac6c6e1f1e11ed7" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.219450 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.228124 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wkmcr"] Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.240702 4619 scope.go:117] "RemoveContainer" containerID="fa7bc46e92a3aaadd08d72231e65445ef8a48645a5946c9a954c739a9cea691a" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.287092 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" path="/var/lib/kubelet/pods/dae431ee-6510-4c51-b099-96092c0b5b6f/volumes" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.643825 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.744732 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.766276 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js7lf\" (UniqueName: \"kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf\") pod \"8d93c669-300a-4954-b044-df49960ba3f0\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.766387 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data\") pod \"8d93c669-300a-4954-b044-df49960ba3f0\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.766425 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts\") pod \"8d93c669-300a-4954-b044-df49960ba3f0\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.766444 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle\") pod \"8d93c669-300a-4954-b044-df49960ba3f0\" (UID: \"8d93c669-300a-4954-b044-df49960ba3f0\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.770187 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.770232 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.772420 4619 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts" (OuterVolumeSpecName: "scripts") pod "8d93c669-300a-4954-b044-df49960ba3f0" (UID: "8d93c669-300a-4954-b044-df49960ba3f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.772485 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf" (OuterVolumeSpecName: "kube-api-access-js7lf") pod "8d93c669-300a-4954-b044-df49960ba3f0" (UID: "8d93c669-300a-4954-b044-df49960ba3f0"). InnerVolumeSpecName "kube-api-access-js7lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.792177 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d93c669-300a-4954-b044-df49960ba3f0" (UID: "8d93c669-300a-4954-b044-df49960ba3f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.803289 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data" (OuterVolumeSpecName: "config-data") pod "8d93c669-300a-4954-b044-df49960ba3f0" (UID: "8d93c669-300a-4954-b044-df49960ba3f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.868448 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5785\" (UniqueName: \"kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785\") pod \"6460284b-1cb6-444b-a2f7-676f38e03a78\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.868550 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts\") pod \"6460284b-1cb6-444b-a2f7-676f38e03a78\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.868609 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle\") pod \"6460284b-1cb6-444b-a2f7-676f38e03a78\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.868914 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data\") pod \"6460284b-1cb6-444b-a2f7-676f38e03a78\" (UID: \"6460284b-1cb6-444b-a2f7-676f38e03a78\") " Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.870339 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js7lf\" (UniqueName: \"kubernetes.io/projected/8d93c669-300a-4954-b044-df49960ba3f0-kube-api-access-js7lf\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.870357 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.870368 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.870378 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d93c669-300a-4954-b044-df49960ba3f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.871929 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785" (OuterVolumeSpecName: "kube-api-access-t5785") pod "6460284b-1cb6-444b-a2f7-676f38e03a78" (UID: "6460284b-1cb6-444b-a2f7-676f38e03a78"). InnerVolumeSpecName "kube-api-access-t5785". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.875578 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts" (OuterVolumeSpecName: "scripts") pod "6460284b-1cb6-444b-a2f7-676f38e03a78" (UID: "6460284b-1cb6-444b-a2f7-676f38e03a78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.899118 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6460284b-1cb6-444b-a2f7-676f38e03a78" (UID: "6460284b-1cb6-444b-a2f7-676f38e03a78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.903863 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data" (OuterVolumeSpecName: "config-data") pod "6460284b-1cb6-444b-a2f7-676f38e03a78" (UID: "6460284b-1cb6-444b-a2f7-676f38e03a78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.972401 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.972445 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.972461 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6460284b-1cb6-444b-a2f7-676f38e03a78-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:33 crc kubenswrapper[4619]: I0126 11:15:33.972473 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5785\" (UniqueName: \"kubernetes.io/projected/6460284b-1cb6-444b-a2f7-676f38e03a78-kube-api-access-t5785\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.199804 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" event={"ID":"8d93c669-300a-4954-b044-df49960ba3f0","Type":"ContainerDied","Data":"1c933ad62b0d3780c4979dd78535cfe6b5e5496046ed596e9fe9794a38a8e0c5"} Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.199912 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c933ad62b0d3780c4979dd78535cfe6b5e5496046ed596e9fe9794a38a8e0c5" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.199818 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x6sj5" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.202500 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s6glk" event={"ID":"6460284b-1cb6-444b-a2f7-676f38e03a78","Type":"ContainerDied","Data":"c2f689db8a6037f06a457543d608db0e4e906aeaae33154865a39b770f6a3c6c"} Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.202532 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f689db8a6037f06a457543d608db0e4e906aeaae33154865a39b770f6a3c6c" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.202643 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s6glk" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.303888 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:15:34 crc kubenswrapper[4619]: E0126 11:15:34.304667 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="init" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304684 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="init" Jan 26 11:15:34 crc kubenswrapper[4619]: E0126 11:15:34.304698 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="dnsmasq-dns" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304706 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="dnsmasq-dns" Jan 26 11:15:34 crc kubenswrapper[4619]: E0126 11:15:34.304724 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d93c669-300a-4954-b044-df49960ba3f0" containerName="nova-cell1-conductor-db-sync" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304732 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d93c669-300a-4954-b044-df49960ba3f0" containerName="nova-cell1-conductor-db-sync" Jan 26 11:15:34 crc kubenswrapper[4619]: E0126 11:15:34.304745 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6460284b-1cb6-444b-a2f7-676f38e03a78" containerName="nova-manage" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304754 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6460284b-1cb6-444b-a2f7-676f38e03a78" containerName="nova-manage" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304939 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae431ee-6510-4c51-b099-96092c0b5b6f" containerName="dnsmasq-dns" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304955 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6460284b-1cb6-444b-a2f7-676f38e03a78" containerName="nova-manage" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.304978 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d93c669-300a-4954-b044-df49960ba3f0" containerName="nova-cell1-conductor-db-sync" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.305701 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.308418 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.334260 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.383356 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5h4b\" (UniqueName: \"kubernetes.io/projected/65f97f5d-163a-469b-b63e-f2763404b64c-kube-api-access-g5h4b\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.383478 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.383587 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.429736 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.430008 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-log" containerID="cri-o://d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf" gracePeriod=30 Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.430055 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-api" containerID="cri-o://7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d" gracePeriod=30 Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.447844 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.448060 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerName="nova-scheduler-scheduler" containerID="cri-o://537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" gracePeriod=30 Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.478162 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.478386 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-log" containerID="cri-o://67b1887be548fd70dd204d0a2ada4ca8cbb170056d348c7236c3bcf4126be088" gracePeriod=30 Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.478477 4619 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-metadata" containerID="cri-o://6d397f7feaa7bbcba4058926eae83063df8a3f445d7a44d834c5f3c2cf6ac468" gracePeriod=30 Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.485226 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.485332 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5h4b\" (UniqueName: \"kubernetes.io/projected/65f97f5d-163a-469b-b63e-f2763404b64c-kube-api-access-g5h4b\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.485398 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.492551 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.509597 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f97f5d-163a-469b-b63e-f2763404b64c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.522129 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5h4b\" (UniqueName: \"kubernetes.io/projected/65f97f5d-163a-469b-b63e-f2763404b64c-kube-api-access-g5h4b\") pod \"nova-cell1-conductor-0\" (UID: \"65f97f5d-163a-469b-b63e-f2763404b64c\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:34 crc kubenswrapper[4619]: I0126 11:15:34.624192 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.038917 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.245467 4619 generic.go:334] "Generic (PLEG): container finished" podID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerID="d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf" exitCode=143 Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.245580 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerDied","Data":"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf"} Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.248138 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"65f97f5d-163a-469b-b63e-f2763404b64c","Type":"ContainerStarted","Data":"81f722f8262fdd35460d8f7b41a1658da1406905b899f3fa3ebc70b3680e9768"} Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.252096 4619 generic.go:334] "Generic (PLEG): container finished" podID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerID="6d397f7feaa7bbcba4058926eae83063df8a3f445d7a44d834c5f3c2cf6ac468" exitCode=0 Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.252122 4619 generic.go:334] "Generic (PLEG): container finished" podID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerID="67b1887be548fd70dd204d0a2ada4ca8cbb170056d348c7236c3bcf4126be088" exitCode=143 Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.252142 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerDied","Data":"6d397f7feaa7bbcba4058926eae83063df8a3f445d7a44d834c5f3c2cf6ac468"} Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.252169 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerDied","Data":"67b1887be548fd70dd204d0a2ada4ca8cbb170056d348c7236c3bcf4126be088"} Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.376779 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401288 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jrqk\" (UniqueName: \"kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk\") pod \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401371 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs\") pod \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401415 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs\") pod \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401456 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle\") pod \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401514 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data\") pod \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\" (UID: \"1cb987a0-5d4e-4ace-9e20-3e65194737b6\") " Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.401980 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs" (OuterVolumeSpecName: "logs") pod "1cb987a0-5d4e-4ace-9e20-3e65194737b6" (UID: "1cb987a0-5d4e-4ace-9e20-3e65194737b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.406549 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk" (OuterVolumeSpecName: "kube-api-access-9jrqk") pod "1cb987a0-5d4e-4ace-9e20-3e65194737b6" (UID: "1cb987a0-5d4e-4ace-9e20-3e65194737b6"). InnerVolumeSpecName "kube-api-access-9jrqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.447564 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data" (OuterVolumeSpecName: "config-data") pod "1cb987a0-5d4e-4ace-9e20-3e65194737b6" (UID: "1cb987a0-5d4e-4ace-9e20-3e65194737b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.466199 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb987a0-5d4e-4ace-9e20-3e65194737b6" (UID: "1cb987a0-5d4e-4ace-9e20-3e65194737b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.468788 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1cb987a0-5d4e-4ace-9e20-3e65194737b6" (UID: "1cb987a0-5d4e-4ace-9e20-3e65194737b6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.503600 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jrqk\" (UniqueName: \"kubernetes.io/projected/1cb987a0-5d4e-4ace-9e20-3e65194737b6-kube-api-access-9jrqk\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.503832 4619 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.503892 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb987a0-5d4e-4ace-9e20-3e65194737b6-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.503956 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:35 crc kubenswrapper[4619]: I0126 11:15:35.504017 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb987a0-5d4e-4ace-9e20-3e65194737b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.142958 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246268 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246316 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246401 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246457 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246670 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246700 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnlb7\" (UniqueName: \"kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.246729 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data\") pod \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\" (UID: \"7067fd1d-9a57-41ce-9dae-c4d7b143ff53\") " Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.247385 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.248142 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.248214 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.252915 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts" (OuterVolumeSpecName: "scripts") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.253209 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7" (OuterVolumeSpecName: "kube-api-access-xnlb7") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "kube-api-access-xnlb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.286786 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.299521 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1cb987a0-5d4e-4ace-9e20-3e65194737b6","Type":"ContainerDied","Data":"14530117363e93ae40c357604f64cac8a5ad3d8dd7cfe71cbc6d888c3b112b26"} Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.299572 4619 scope.go:117] "RemoveContainer" containerID="6d397f7feaa7bbcba4058926eae83063df8a3f445d7a44d834c5f3c2cf6ac468" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.299844 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.306488 4619 generic.go:334] "Generic (PLEG): container finished" podID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerID="426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e" exitCode=137 Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.306554 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerDied","Data":"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e"} Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.306579 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7067fd1d-9a57-41ce-9dae-c4d7b143ff53","Type":"ContainerDied","Data":"128eda70000d11564e95cac7bfe036e0e6cd04b4ba44cff338c6204caf69c0bf"} Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.306671 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.314967 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"65f97f5d-163a-469b-b63e-f2763404b64c","Type":"ContainerStarted","Data":"a0e828320dba676094d146ff9cc03c6575191273c87be39cde803cdd0ef1cc82"} Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.315262 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.336080 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.336062998 podStartE2EDuration="2.336062998s" podCreationTimestamp="2026-01-26 11:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:36.335367679 +0000 UTC m=+1235.369408395" watchObservedRunningTime="2026-01-26 11:15:36.336062998 +0000 UTC m=+1235.370103714" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.350174 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.350201 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.350211 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnlb7\" (UniqueName: \"kubernetes.io/projected/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-kube-api-access-xnlb7\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.350220 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.367325 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.406548 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data" (OuterVolumeSpecName: "config-data") pod "7067fd1d-9a57-41ce-9dae-c4d7b143ff53" (UID: "7067fd1d-9a57-41ce-9dae-c4d7b143ff53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.411538 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.424740 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.427195 4619 scope.go:117] "RemoveContainer" containerID="67b1887be548fd70dd204d0a2ada4ca8cbb170056d348c7236c3bcf4126be088" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.430951 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431322 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-metadata" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431339 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-metadata" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431359 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="proxy-httpd" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431365 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="proxy-httpd" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431378 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-central-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431384 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-central-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431409 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="sg-core" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431415 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="sg-core" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431430 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-log" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431436 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-log" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.431447 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-notification-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431453 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-notification-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431632 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-metadata" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431645 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="proxy-httpd" Jan 26 11:15:36 crc kubenswrapper[4619]: 
I0126 11:15:36.431655 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="sg-core" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431671 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" containerName="nova-metadata-log" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431683 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-notification-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.431696 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" containerName="ceilometer-central-agent" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.432733 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.435196 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.435334 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.440595 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.453878 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.453910 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067fd1d-9a57-41ce-9dae-c4d7b143ff53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.466903 4619 scope.go:117] "RemoveContainer" containerID="426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.488018 4619 scope.go:117] "RemoveContainer" containerID="a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.504952 4619 scope.go:117] "RemoveContainer" containerID="2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.523775 4619 scope.go:117] "RemoveContainer" containerID="8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.540681 4619 scope.go:117] "RemoveContainer" containerID="426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.548714 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e\": container with ID starting with 426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e not found: ID does not exist" containerID="426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.548756 4619 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e"} err="failed to get container status \"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e\": rpc error: code = NotFound desc = could not find container \"426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e\": container with ID starting with 426ad69375c536898c39d20f34d4396bd871ee09f518d8bb8bbd6dc3d42a1a0e not found: ID does not exist" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.548798 4619 scope.go:117] "RemoveContainer" containerID="a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.550924 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a\": container with ID starting with a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a not found: ID does not exist" containerID="a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.550954 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a"} err="failed to get container status \"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a\": rpc error: code = NotFound desc = could not find container \"a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a\": container with ID starting with a3c596e5ac9b06f6e2edaa59196e5a3178a7a0bda700aebccc55838788faf32a not found: ID does not exist" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.550974 4619 scope.go:117] "RemoveContainer" containerID="2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.551551 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb\": container with ID starting with 2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb not found: ID does not exist" containerID="2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.551569 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb"} err="failed to get container status \"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb\": rpc error: code = NotFound desc = could not find container \"2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb\": container with ID starting with 2c0f46f6e861df70a19f1abde1563e6c6e56140a5a6c6b99c7e23b207e99eddb not found: ID does not exist" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.551597 4619 scope.go:117] "RemoveContainer" containerID="8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.551888 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2\": container with ID starting with 8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2 not found: ID does not exist" 
containerID="8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.551927 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2"} err="failed to get container status \"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2\": rpc error: code = NotFound desc = could not find container \"8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2\": container with ID starting with 8e0c9dc4d6ea7d6df01a9dfe5f9c9c033c80c38829125e7a986c20bb4d80a7a2 not found: ID does not exist" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.555263 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.555328 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.555452 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.555491 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cg6z\" (UniqueName: \"kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.555528 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.562181 4619 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.563511 4619 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.564546 4619 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:15:36 crc kubenswrapper[4619]: E0126 11:15:36.564569 4619 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerName="nova-scheduler-scheduler" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.637637 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.645406 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.656888 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.656954 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cg6z\" (UniqueName: \"kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.656982 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.657037 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.657086 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.657688 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.661180 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.661187 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
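[Editor's note] The "ExecSync cmd from runtime service failed" entries show the kubelet running nova-scheduler's exec readiness probe (/usr/bin/pgrep -r DRST nova-scheduler, i.e. "is there a live process in run state D, R, S or T") inside a container that CRI-O is already stopping; the runtime refuses new execs during teardown, which is why the subsequent "Probe errored" is expected and harmless here. A hedged sketch of issuing the same probe over CRI, assuming the standard k8s.io/cri-api client:

package probe

import (
	"context"
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// execProbe runs the pgrep check above via CRI ExecSync, the mechanism the
// kubelet uses for exec probes. The timeout value is an assumption; the log
// does not record the probe's configured timeoutSeconds.
func execProbe(ctx context.Context, rt runtimeapi.RuntimeServiceClient, containerID string) error {
	resp, err := rt.ExecSync(ctx, &runtimeapi.ExecSyncRequest{
		ContainerId: containerID,
		Cmd:         []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
		Timeout:     1, // seconds
	})
	if err != nil {
		// While the container is being stopped, CRI-O answers with the
		// Unknown-code "cannot register an exec PID" error logged above.
		return err
	}
	if resp.ExitCode != 0 {
		return fmt.Errorf("probe command exited %d", resp.ExitCode)
	}
	return nil
}

[End note]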
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.662925 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.664741 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.668786 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.670906 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.671071 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.674343 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cg6z\" (UniqueName: \"kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z\") pod \"nova-metadata-0\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.709285 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.767032 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.859628 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.859991 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dcf\" (UniqueName: \"kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.860040 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.860695 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.860915 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.860938 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.860967 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962373 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962433 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2dcf\" (UniqueName: \"kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962487 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962506 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962578 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962595 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.962627 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.964078 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.964163 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.969227 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.969314 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.971911 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.972036 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.980975 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2dcf\" (UniqueName: \"kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf\") pod \"ceilometer-0\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " pod="openstack/ceilometer-0" Jan 26 11:15:36 crc kubenswrapper[4619]: I0126 11:15:36.984697 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.199962 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.276381 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb987a0-5d4e-4ace-9e20-3e65194737b6" path="/var/lib/kubelet/pods/1cb987a0-5d4e-4ace-9e20-3e65194737b6/volumes" Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.277050 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7067fd1d-9a57-41ce-9dae-c4d7b143ff53" path="/var/lib/kubelet/pods/7067fd1d-9a57-41ce-9dae-c4d7b143ff53/volumes" Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.335550 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerStarted","Data":"6db8706fa66c2be69736b72fa10f04d0171c977a3206b0ec4fc9dd20e237bfba"} Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.480888 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:15:37 crc kubenswrapper[4619]: W0126 11:15:37.490481 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba231012_5964_4056_812c_8e0991a03b1d.slice/crio-e3905d6532c2ce9385dae72d1a72869197491706d903736517b69f73441be01e WatchSource:0}: Error finding container e3905d6532c2ce9385dae72d1a72869197491706d903736517b69f73441be01e: Status 404 returned error can't find the container with id e3905d6532c2ce9385dae72d1a72869197491706d903736517b69f73441be01e Jan 26 11:15:37 crc kubenswrapper[4619]: I0126 11:15:37.496687 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.354574 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerStarted","Data":"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3"} Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.354952 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerStarted","Data":"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f"} Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.360373 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerStarted","Data":"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d"} Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.360452 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerStarted","Data":"e3905d6532c2ce9385dae72d1a72869197491706d903736517b69f73441be01e"} Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.381450 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.381431664 podStartE2EDuration="2.381431664s" podCreationTimestamp="2026-01-26 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:38.378504555 +0000 UTC m=+1237.412545271" watchObservedRunningTime="2026-01-26 11:15:38.381431664 +0000 UTC m=+1237.415472380" Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.743651 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.899835 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle\") pod \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.900002 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zdd\" (UniqueName: \"kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd\") pod \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.900136 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data\") pod \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\" (UID: \"88b8c21e-24e1-4aeb-9da6-629bf3336e35\") " Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.911533 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd" (OuterVolumeSpecName: "kube-api-access-64zdd") pod "88b8c21e-24e1-4aeb-9da6-629bf3336e35" (UID: "88b8c21e-24e1-4aeb-9da6-629bf3336e35"). InnerVolumeSpecName "kube-api-access-64zdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.935699 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data" (OuterVolumeSpecName: "config-data") pod "88b8c21e-24e1-4aeb-9da6-629bf3336e35" (UID: "88b8c21e-24e1-4aeb-9da6-629bf3336e35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:38 crc kubenswrapper[4619]: I0126 11:15:38.942567 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88b8c21e-24e1-4aeb-9da6-629bf3336e35" (UID: "88b8c21e-24e1-4aeb-9da6-629bf3336e35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.002863 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zdd\" (UniqueName: \"kubernetes.io/projected/88b8c21e-24e1-4aeb-9da6-629bf3336e35-kube-api-access-64zdd\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.003075 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.003154 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8c21e-24e1-4aeb-9da6-629bf3336e35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.296299 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.374317 4619 generic.go:334] "Generic (PLEG): container finished" podID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerID="7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d" exitCode=0 Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.374728 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerDied","Data":"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d"} Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.374797 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aedaf21e-3f5f-44ab-a83f-b12671950ae2","Type":"ContainerDied","Data":"94e021bcfce5e8b7bdd55d12aae8801a3e4deaf52d6b8502f69ff19ec5cec39b"} Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.374810 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.374821 4619 scope.go:117] "RemoveContainer" containerID="7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.380418 4619 generic.go:334] "Generic (PLEG): container finished" podID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" exitCode=0 Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.380467 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88b8c21e-24e1-4aeb-9da6-629bf3336e35","Type":"ContainerDied","Data":"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c"} Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.380498 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88b8c21e-24e1-4aeb-9da6-629bf3336e35","Type":"ContainerDied","Data":"1a364cc7bf95f23830e95e58a59919700cc7d9ee23eb4f9467cda41f46ccc740"} Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.380532 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.383762 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerStarted","Data":"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213"} Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.405475 4619 scope.go:117] "RemoveContainer" containerID="d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.411681 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data\") pod \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.411759 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle\") pod \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.411829 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs\") pod \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.411859 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ms9s\" (UniqueName: \"kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s\") pod \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\" (UID: \"aedaf21e-3f5f-44ab-a83f-b12671950ae2\") " Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.416244 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs" (OuterVolumeSpecName: "logs") pod "aedaf21e-3f5f-44ab-a83f-b12671950ae2" (UID: "aedaf21e-3f5f-44ab-a83f-b12671950ae2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.430815 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s" (OuterVolumeSpecName: "kube-api-access-6ms9s") pod "aedaf21e-3f5f-44ab-a83f-b12671950ae2" (UID: "aedaf21e-3f5f-44ab-a83f-b12671950ae2"). InnerVolumeSpecName "kube-api-access-6ms9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.477038 4619 scope.go:117] "RemoveContainer" containerID="7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.482716 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.482967 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d\": container with ID starting with 7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d not found: ID does not exist" containerID="7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.483020 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d"} err="failed to get container status \"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d\": rpc error: code = NotFound desc = could not find container \"7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d\": container with ID starting with 7ac33965ec3c05b6153934f43b012a79368e65283edae1dd1bd9a04c689b5d1d not found: ID does not exist" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.483051 4619 scope.go:117] "RemoveContainer" containerID="d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.483182 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data" (OuterVolumeSpecName: "config-data") pod "aedaf21e-3f5f-44ab-a83f-b12671950ae2" (UID: "aedaf21e-3f5f-44ab-a83f-b12671950ae2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.489292 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf\": container with ID starting with d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf not found: ID does not exist" containerID="d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.489324 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf"} err="failed to get container status \"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf\": rpc error: code = NotFound desc = could not find container \"d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf\": container with ID starting with d3a3e7791d2a81b5a78ff54ff5e95ab3504defe7fcb5909bea3a565b5e85afdf not found: ID does not exist" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.489350 4619 scope.go:117] "RemoveContainer" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.495740 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.505064 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.505452 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-log" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.505468 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-log" Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.505491 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerName="nova-scheduler-scheduler" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.505498 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerName="nova-scheduler-scheduler" Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.505516 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-api" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.505522 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-api" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.506122 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aedaf21e-3f5f-44ab-a83f-b12671950ae2" (UID: "aedaf21e-3f5f-44ab-a83f-b12671950ae2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.506168 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-log" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.506186 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" containerName="nova-scheduler-scheduler" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.506218 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" containerName="nova-api-api" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.506813 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.509741 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.513498 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedaf21e-3f5f-44ab-a83f-b12671950ae2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.513517 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ms9s\" (UniqueName: \"kubernetes.io/projected/aedaf21e-3f5f-44ab-a83f-b12671950ae2-kube-api-access-6ms9s\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.513526 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.513535 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedaf21e-3f5f-44ab-a83f-b12671950ae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.516659 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.530331 4619 scope.go:117] "RemoveContainer" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" Jan 26 11:15:39 crc kubenswrapper[4619]: E0126 11:15:39.535835 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c\": container with ID starting with 537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c not found: ID does not exist" containerID="537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.535875 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c"} err="failed to get container status \"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c\": rpc error: code = NotFound desc = could not find container \"537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c\": container with ID starting with 537bf1847d145967018fea2b4378312d5fa6c0e6af7efaf5775067b7d95a5f8c not found: ID does not exist" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.615132 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.615163 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.615347 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8mg\" (UniqueName: \"kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.702313 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.709561 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.717330 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.717377 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.717475 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8mg\" (UniqueName: \"kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.720797 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.725300 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.730009 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.731472 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.733722 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.742810 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.750223 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8mg\" (UniqueName: \"kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg\") pod \"nova-scheduler-0\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.818804 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.818871 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.818984 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5j7\" (UniqueName: \"kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.819014 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.829077 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.920166 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.920256 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.920295 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.920366 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5j7\" (UniqueName: \"kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.921062 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.924202 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.928802 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:39 crc kubenswrapper[4619]: I0126 11:15:39.940072 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5j7\" (UniqueName: \"kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7\") pod \"nova-api-0\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " pod="openstack/nova-api-0" Jan 26 11:15:40 crc kubenswrapper[4619]: I0126 11:15:40.051449 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:15:40 crc kubenswrapper[4619]: I0126 11:15:40.258060 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:15:40 crc kubenswrapper[4619]: I0126 11:15:40.392506 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:15:40 crc kubenswrapper[4619]: W0126 11:15:40.393778 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb19b268b_c1eb_4093_87b8_f69e4d95c9d3.slice/crio-615bd1147b44a40cdcad20c37fd594d9194201e46ea29a6a292fa485ba35ca15 WatchSource:0}: Error finding container 615bd1147b44a40cdcad20c37fd594d9194201e46ea29a6a292fa485ba35ca15: Status 404 returned error can't find the container with id 615bd1147b44a40cdcad20c37fd594d9194201e46ea29a6a292fa485ba35ca15 Jan 26 11:15:40 crc kubenswrapper[4619]: I0126 11:15:40.401520 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerStarted","Data":"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053"} Jan 26 11:15:40 crc kubenswrapper[4619]: I0126 11:15:40.404904 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93659a38-4ded-4760-bd12-e6eddbe28f53","Type":"ContainerStarted","Data":"3bb0507d81ab6f290e10729bfa54fdc1f333d600fe7e818a731730b6256efbaf"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.313214 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b8c21e-24e1-4aeb-9da6-629bf3336e35" path="/var/lib/kubelet/pods/88b8c21e-24e1-4aeb-9da6-629bf3336e35/volumes" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.314471 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aedaf21e-3f5f-44ab-a83f-b12671950ae2" path="/var/lib/kubelet/pods/aedaf21e-3f5f-44ab-a83f-b12671950ae2/volumes" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.427245 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerStarted","Data":"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.427954 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.434037 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93659a38-4ded-4760-bd12-e6eddbe28f53","Type":"ContainerStarted","Data":"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.439085 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerStarted","Data":"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.439127 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerStarted","Data":"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.439138 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerStarted","Data":"615bd1147b44a40cdcad20c37fd594d9194201e46ea29a6a292fa485ba35ca15"} Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.460247 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.005432292 podStartE2EDuration="5.460232696s" podCreationTimestamp="2026-01-26 11:15:36 +0000 UTC" firstStartedPulling="2026-01-26 11:15:37.496482462 +0000 UTC m=+1236.530523168" lastFinishedPulling="2026-01-26 11:15:40.951282856 +0000 UTC m=+1239.985323572" observedRunningTime="2026-01-26 11:15:41.446270822 +0000 UTC m=+1240.480311538" watchObservedRunningTime="2026-01-26 11:15:41.460232696 +0000 UTC m=+1240.494273412" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.479090 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4790739 podStartE2EDuration="2.4790739s" podCreationTimestamp="2026-01-26 11:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:41.476107281 +0000 UTC m=+1240.510147987" watchObservedRunningTime="2026-01-26 11:15:41.4790739 +0000 UTC m=+1240.513114616" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.501307 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.501291325 podStartE2EDuration="2.501291325s" podCreationTimestamp="2026-01-26 11:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:41.49591268 +0000 UTC m=+1240.529953396" watchObservedRunningTime="2026-01-26 11:15:41.501291325 +0000 UTC m=+1240.535332041" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.768243 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:15:41 crc kubenswrapper[4619]: I0126 11:15:41.768526 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:15:44 crc kubenswrapper[4619]: I0126 11:15:44.657609 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 11:15:44 crc kubenswrapper[4619]: I0126 11:15:44.830236 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 11:15:46 crc kubenswrapper[4619]: I0126 11:15:46.767583 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:15:46 crc kubenswrapper[4619]: I0126 11:15:46.768082 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:15:47 crc kubenswrapper[4619]: I0126 11:15:47.784011 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:15:47 crc kubenswrapper[4619]: I0126 11:15:47.784114 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": 
Jan 26 11:15:47 crc kubenswrapper[4619]: I0126 11:15:47.784114 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:15:49 crc kubenswrapper[4619]: I0126 11:15:49.830277 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 11:15:49 crc kubenswrapper[4619]: I0126 11:15:49.872502 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 11:15:50 crc kubenswrapper[4619]: I0126 11:15:50.052392 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:15:50 crc kubenswrapper[4619]: I0126 11:15:50.052719 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:15:50 crc kubenswrapper[4619]: I0126 11:15:50.571101 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 11:15:51 crc kubenswrapper[4619]: I0126 11:15:51.202825 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:15:51 crc kubenswrapper[4619]: I0126 11:15:51.203073 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:15:56 crc kubenswrapper[4619]: I0126 11:15:56.774817 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 11:15:56 crc kubenswrapper[4619]: I0126 11:15:56.777258 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 11:15:56 crc kubenswrapper[4619]: I0126 11:15:56.791496 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.523437 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.618353 4619 generic.go:334] "Generic (PLEG): container finished" podID="278e84bc-0704-4e94-af43-82b629145b59" containerID="e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939" exitCode=137
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.618768 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.619423 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"278e84bc-0704-4e94-af43-82b629145b59","Type":"ContainerDied","Data":"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"}
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.619457 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"278e84bc-0704-4e94-af43-82b629145b59","Type":"ContainerDied","Data":"16386aac70b08decbd0f9b131d78c6ae36c83f20e7e2c7c4bcd4603fb50812cc"}
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.619505 4619 scope.go:117] "RemoveContainer" containerID="e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.624468 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.681879 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k29q8\" (UniqueName: \"kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8\") pod \"278e84bc-0704-4e94-af43-82b629145b59\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") "
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.682084 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data\") pod \"278e84bc-0704-4e94-af43-82b629145b59\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") "
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.682200 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle\") pod \"278e84bc-0704-4e94-af43-82b629145b59\" (UID: \"278e84bc-0704-4e94-af43-82b629145b59\") "
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.689736 4619 scope.go:117] "RemoveContainer" containerID="e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"
Jan 26 11:15:57 crc kubenswrapper[4619]: E0126 11:15:57.690320 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939\": container with ID starting with e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939 not found: ID does not exist" containerID="e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.690346 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939"} err="failed to get container status \"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939\": rpc error: code = NotFound desc = could not find container \"e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939\": container with ID starting with e7c38eebc5102b8beaa684db8fe736ac93049a43d9b8458ccf311ed8662af939 not found: ID does not exist"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.711654 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8" (OuterVolumeSpecName: "kube-api-access-k29q8") pod "278e84bc-0704-4e94-af43-82b629145b59" (UID: "278e84bc-0704-4e94-af43-82b629145b59"). InnerVolumeSpecName "kube-api-access-k29q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.714961 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "278e84bc-0704-4e94-af43-82b629145b59" (UID: "278e84bc-0704-4e94-af43-82b629145b59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.743051 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data" (OuterVolumeSpecName: "config-data") pod "278e84bc-0704-4e94-af43-82b629145b59" (UID: "278e84bc-0704-4e94-af43-82b629145b59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.784059 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.784093 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k29q8\" (UniqueName: \"kubernetes.io/projected/278e84bc-0704-4e94-af43-82b629145b59-kube-api-access-k29q8\") on node \"crc\" DevicePath \"\""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.784103 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278e84bc-0704-4e94-af43-82b629145b59-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.970235 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.980108 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.994043 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 11:15:57 crc kubenswrapper[4619]: E0126 11:15:57.994466 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278e84bc-0704-4e94-af43-82b629145b59" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.994489 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="278e84bc-0704-4e94-af43-82b629145b59" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.994689 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="278e84bc-0704-4e94-af43-82b629145b59" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 11:15:57 crc kubenswrapper[4619]: I0126 11:15:57.995303 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.001096 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.004077 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.004475 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.005541 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.090304 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.090335 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4sb\" (UniqueName: \"kubernetes.io/projected/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-kube-api-access-7z4sb\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.090372 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.090404 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.090459 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.192550 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4sb\" (UniqueName: \"kubernetes.io/projected/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-kube-api-access-7z4sb\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.192984 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.193140 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.193279 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.193441 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.197467 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.198400 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.198686 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.199061 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.211268 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4sb\" (UniqueName: \"kubernetes.io/projected/2c34909b-2fd9-4e80-b0ef-9dbf87382ee7-kube-api-access-7z4sb\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.318436 4619 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:15:58 crc kubenswrapper[4619]: I0126 11:15:58.805574 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:15:58 crc kubenswrapper[4619]: W0126 11:15:58.816544 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c34909b_2fd9_4e80_b0ef_9dbf87382ee7.slice/crio-05df9259451add03d4012f914c2c8a3047827931ffe5e2491d35ea7dcd4e2841 WatchSource:0}: Error finding container 05df9259451add03d4012f914c2c8a3047827931ffe5e2491d35ea7dcd4e2841: Status 404 returned error can't find the container with id 05df9259451add03d4012f914c2c8a3047827931ffe5e2491d35ea7dcd4e2841 Jan 26 11:15:59 crc kubenswrapper[4619]: I0126 11:15:59.271125 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278e84bc-0704-4e94-af43-82b629145b59" path="/var/lib/kubelet/pods/278e84bc-0704-4e94-af43-82b629145b59/volumes" Jan 26 11:15:59 crc kubenswrapper[4619]: I0126 11:15:59.640921 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7","Type":"ContainerStarted","Data":"7d4e3f1aad428ccd1f99f796a355c691ef0a2d3df8e0e3a8cebea05783d8d92b"} Jan 26 11:15:59 crc kubenswrapper[4619]: I0126 11:15:59.641316 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c34909b-2fd9-4e80-b0ef-9dbf87382ee7","Type":"ContainerStarted","Data":"05df9259451add03d4012f914c2c8a3047827931ffe5e2491d35ea7dcd4e2841"} Jan 26 11:15:59 crc kubenswrapper[4619]: I0126 11:15:59.673933 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.673910884 podStartE2EDuration="2.673910884s" podCreationTimestamp="2026-01-26 11:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:15:59.669260479 +0000 UTC m=+1258.703301205" watchObservedRunningTime="2026-01-26 11:15:59.673910884 +0000 UTC m=+1258.707951600" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.056714 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.057639 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.057779 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.063237 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.674029 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.679346 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.912324 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.913747 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:00 crc kubenswrapper[4619]: I0126 11:16:00.937158 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047023 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047091 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047449 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047497 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncc5\" (UniqueName: \"kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047537 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.047575 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149231 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149303 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncc5\" (UniqueName: \"kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149337 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149364 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149454 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.149510 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.150268 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.150288 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.150510 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.150535 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.151217 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.169051 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncc5\" (UniqueName: 
\"kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5\") pod \"dnsmasq-dns-89c5cd4d5-rkvf5\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.279864 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:01 crc kubenswrapper[4619]: I0126 11:16:01.752188 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:16:02 crc kubenswrapper[4619]: I0126 11:16:02.687114 4619 generic.go:334] "Generic (PLEG): container finished" podID="88977f8b-7824-4631-b531-45c5baf76787" containerID="6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d" exitCode=0 Jan 26 11:16:02 crc kubenswrapper[4619]: I0126 11:16:02.687262 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" event={"ID":"88977f8b-7824-4631-b531-45c5baf76787","Type":"ContainerDied","Data":"6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d"} Jan 26 11:16:02 crc kubenswrapper[4619]: I0126 11:16:02.688025 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" event={"ID":"88977f8b-7824-4631-b531-45c5baf76787","Type":"ContainerStarted","Data":"64d768a2fb8b2ed88af78819c4b0117d36a5c597588d4ff05ae1f4b77945ec8a"} Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.294055 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.294576 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-central-agent" containerID="cri-o://e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.294714 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="proxy-httpd" containerID="cri-o://f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.294751 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="sg-core" containerID="cri-o://3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.294780 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-notification-agent" containerID="cri-o://d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.301133 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.322270 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.661644 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.698803 4619 
generic.go:334] "Generic (PLEG): container finished" podID="ba231012-5964-4056-812c-8e0991a03b1d" containerID="f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa" exitCode=0 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.698855 4619 generic.go:334] "Generic (PLEG): container finished" podID="ba231012-5964-4056-812c-8e0991a03b1d" containerID="3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053" exitCode=2 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.698901 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerDied","Data":"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa"} Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.698955 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerDied","Data":"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053"} Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.702319 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" event={"ID":"88977f8b-7824-4631-b531-45c5baf76787","Type":"ContainerStarted","Data":"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27"} Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.702553 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-log" containerID="cri-o://2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.702593 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-api" containerID="cri-o://2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788" gracePeriod=30 Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.702740 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:03 crc kubenswrapper[4619]: I0126 11:16:03.730744 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" podStartSLOduration=3.7307282280000003 podStartE2EDuration="3.730728228s" podCreationTimestamp="2026-01-26 11:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:16:03.726339511 +0000 UTC m=+1262.760380227" watchObservedRunningTime="2026-01-26 11:16:03.730728228 +0000 UTC m=+1262.764768944" Jan 26 11:16:04 crc kubenswrapper[4619]: I0126 11:16:04.716869 4619 generic.go:334] "Generic (PLEG): container finished" podID="ba231012-5964-4056-812c-8e0991a03b1d" containerID="e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d" exitCode=0 Jan 26 11:16:04 crc kubenswrapper[4619]: I0126 11:16:04.716948 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerDied","Data":"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d"} Jan 26 11:16:04 crc kubenswrapper[4619]: I0126 11:16:04.720059 4619 generic.go:334] "Generic (PLEG): container finished" podID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" 
containerID="2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731" exitCode=143 Jan 26 11:16:04 crc kubenswrapper[4619]: I0126 11:16:04.720571 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerDied","Data":"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731"} Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.114307 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.124921 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125057 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125201 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125275 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2dcf\" (UniqueName: \"kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125327 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125359 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.125396 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd\") pod \"ba231012-5964-4056-812c-8e0991a03b1d\" (UID: \"ba231012-5964-4056-812c-8e0991a03b1d\") " Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.126146 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.126227 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.138841 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts" (OuterVolumeSpecName: "scripts") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.150794 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf" (OuterVolumeSpecName: "kube-api-access-t2dcf") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "kube-api-access-t2dcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.201768 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.227206 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.227359 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.227419 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.227476 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2dcf\" (UniqueName: \"kubernetes.io/projected/ba231012-5964-4056-812c-8e0991a03b1d-kube-api-access-t2dcf\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.227529 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba231012-5964-4056-812c-8e0991a03b1d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.286445 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.303086 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data" (OuterVolumeSpecName: "config-data") pod "ba231012-5964-4056-812c-8e0991a03b1d" (UID: "ba231012-5964-4056-812c-8e0991a03b1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.329876 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.329902 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba231012-5964-4056-812c-8e0991a03b1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.734731 4619 generic.go:334] "Generic (PLEG): container finished" podID="ba231012-5964-4056-812c-8e0991a03b1d" containerID="d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213" exitCode=0 Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.734791 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.734811 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerDied","Data":"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213"} Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.735172 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba231012-5964-4056-812c-8e0991a03b1d","Type":"ContainerDied","Data":"e3905d6532c2ce9385dae72d1a72869197491706d903736517b69f73441be01e"} Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.735450 4619 scope.go:117] "RemoveContainer" containerID="f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.780818 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.785573 4619 scope.go:117] "RemoveContainer" containerID="3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.788999 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.813134 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.818038 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="sg-core" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818078 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="sg-core" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.818095 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-central-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818102 4619 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-central-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.818115 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="proxy-httpd" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818122 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="proxy-httpd" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.818154 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-notification-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818160 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-notification-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818310 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-central-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818326 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="sg-core" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818349 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="ceilometer-notification-agent" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.818358 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba231012-5964-4056-812c-8e0991a03b1d" containerName="proxy-httpd" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.819954 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.822980 4619 scope.go:117] "RemoveContainer" containerID="d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.823576 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.824249 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.831523 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839644 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggm2\" (UniqueName: \"kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839680 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839793 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839867 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839888 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839908 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.839926 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.849240 4619 scope.go:117] "RemoveContainer" containerID="e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 
11:16:05.884174 4619 scope.go:117] "RemoveContainer" containerID="f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.884705 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa\": container with ID starting with f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa not found: ID does not exist" containerID="f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.884750 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa"} err="failed to get container status \"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa\": rpc error: code = NotFound desc = could not find container \"f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa\": container with ID starting with f677017898647b96b295ed81a2f14a98db92cc3d5a0fdf20881e967ae12afdfa not found: ID does not exist" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.884789 4619 scope.go:117] "RemoveContainer" containerID="3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.885102 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053\": container with ID starting with 3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053 not found: ID does not exist" containerID="3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.885132 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053"} err="failed to get container status \"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053\": rpc error: code = NotFound desc = could not find container \"3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053\": container with ID starting with 3f6146b7cb7213e683a812becd12f01a045743fec84fe668efbec7b7e1802053 not found: ID does not exist" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.885148 4619 scope.go:117] "RemoveContainer" containerID="d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.885399 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213\": container with ID starting with d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213 not found: ID does not exist" containerID="d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.885422 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213"} err="failed to get container status \"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213\": rpc error: code = NotFound desc = could not find container \"d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213\": container with ID 
starting with d00cb79f0a40b6bf1692d7d9296ddf04f0ce95ee3d304997e7e19820be54b213 not found: ID does not exist" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.885435 4619 scope.go:117] "RemoveContainer" containerID="e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d" Jan 26 11:16:05 crc kubenswrapper[4619]: E0126 11:16:05.885635 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d\": container with ID starting with e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d not found: ID does not exist" containerID="e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.885657 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d"} err="failed to get container status \"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d\": rpc error: code = NotFound desc = could not find container \"e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d\": container with ID starting with e29e8d6c8623c88cf3a9326807acff2c51035b011dc0ab1268cea528b615b66d not found: ID does not exist" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.941100 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.941275 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggm2\" (UniqueName: \"kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.941302 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.942868 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.952351 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.953509 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.953770 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.954075 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.954107 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.954514 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.954576 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.957372 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggm2\" (UniqueName: \"kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.958897 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:05 crc kubenswrapper[4619]: I0126 11:16:05.960596 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") " pod="openstack/ceilometer-0" Jan 26 11:16:06 crc kubenswrapper[4619]: I0126 11:16:06.135961 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:16:06 crc kubenswrapper[4619]: W0126 11:16:06.683386 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372ae209_527d_4911_a2fe_c44eb520a653.slice/crio-e32f5546a8ad81e0dcb319186f1c2be9cc32b726d11f2a7c6e51dc933a5ad254 WatchSource:0}: Error finding container e32f5546a8ad81e0dcb319186f1c2be9cc32b726d11f2a7c6e51dc933a5ad254: Status 404 returned error can't find the container with id e32f5546a8ad81e0dcb319186f1c2be9cc32b726d11f2a7c6e51dc933a5ad254 Jan 26 11:16:06 crc kubenswrapper[4619]: I0126 11:16:06.693586 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:16:06 crc kubenswrapper[4619]: I0126 11:16:06.745257 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerStarted","Data":"e32f5546a8ad81e0dcb319186f1c2be9cc32b726d11f2a7c6e51dc933a5ad254"} Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.218954 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.272423 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba231012-5964-4056-812c-8e0991a03b1d" path="/var/lib/kubelet/pods/ba231012-5964-4056-812c-8e0991a03b1d/volumes" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.289093 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle\") pod \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.289138 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data\") pod \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.289250 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh5j7\" (UniqueName: \"kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7\") pod \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.289467 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs\") pod \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\" (UID: \"b19b268b-c1eb-4093-87b8-f69e4d95c9d3\") " Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.291838 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs" (OuterVolumeSpecName: "logs") pod "b19b268b-c1eb-4093-87b8-f69e4d95c9d3" (UID: "b19b268b-c1eb-4093-87b8-f69e4d95c9d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.297066 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7" (OuterVolumeSpecName: "kube-api-access-bh5j7") pod "b19b268b-c1eb-4093-87b8-f69e4d95c9d3" (UID: "b19b268b-c1eb-4093-87b8-f69e4d95c9d3"). InnerVolumeSpecName "kube-api-access-bh5j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.321273 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data" (OuterVolumeSpecName: "config-data") pod "b19b268b-c1eb-4093-87b8-f69e4d95c9d3" (UID: "b19b268b-c1eb-4093-87b8-f69e4d95c9d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.343045 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b19b268b-c1eb-4093-87b8-f69e4d95c9d3" (UID: "b19b268b-c1eb-4093-87b8-f69e4d95c9d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.391639 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh5j7\" (UniqueName: \"kubernetes.io/projected/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-kube-api-access-bh5j7\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.391852 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.391931 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.391994 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19b268b-c1eb-4093-87b8-f69e4d95c9d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.758061 4619 generic.go:334] "Generic (PLEG): container finished" podID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerID="2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788" exitCode=0 Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.758112 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.758144 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerDied","Data":"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788"} Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.758927 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19b268b-c1eb-4093-87b8-f69e4d95c9d3","Type":"ContainerDied","Data":"615bd1147b44a40cdcad20c37fd594d9194201e46ea29a6a292fa485ba35ca15"} Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.758949 4619 scope.go:117] "RemoveContainer" containerID="2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.761273 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerStarted","Data":"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"} Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.792207 4619 scope.go:117] "RemoveContainer" containerID="2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.793568 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.843789 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.865571 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:07 crc kubenswrapper[4619]: E0126 11:16:07.866537 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-log" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.866557 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-log" Jan 26 11:16:07 crc kubenswrapper[4619]: E0126 11:16:07.866595 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-api" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.866601 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-api" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.866799 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-log" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.866815 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" containerName="nova-api-api" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.867774 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.873821 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.875412 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.875639 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.877776 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902624 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902662 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902726 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9f5\" (UniqueName: \"kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902795 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902845 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.902876 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.960824 4619 scope.go:117] "RemoveContainer" containerID="2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788" Jan 26 11:16:07 crc kubenswrapper[4619]: E0126 11:16:07.961993 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788\": container with ID starting with 2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788 not found: ID does not exist" 
containerID="2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.962035 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788"} err="failed to get container status \"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788\": rpc error: code = NotFound desc = could not find container \"2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788\": container with ID starting with 2b98f9361ce497323075fe5d4e783822f7a4dfb3541cb9bffc399eb67752c788 not found: ID does not exist" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.962066 4619 scope.go:117] "RemoveContainer" containerID="2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731" Jan 26 11:16:07 crc kubenswrapper[4619]: E0126 11:16:07.964209 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731\": container with ID starting with 2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731 not found: ID does not exist" containerID="2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731" Jan 26 11:16:07 crc kubenswrapper[4619]: I0126 11:16:07.964245 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731"} err="failed to get container status \"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731\": rpc error: code = NotFound desc = could not find container \"2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731\": container with ID starting with 2f601cd67188ff54b9b82b435fcab41b5b7a08056ddbd1e8c684cb5a6d715731 not found: ID does not exist" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006671 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9f5\" (UniqueName: \"kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006757 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006799 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006823 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006870 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.006891 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.007676 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.013220 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.013937 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.014274 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.017146 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.024722 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9f5\" (UniqueName: \"kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5\") pod \"nova-api-0\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.216116 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.320005 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.346431 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:16:08 crc kubenswrapper[4619]: W0126 11:16:08.683265 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc3e4f6_b340_4b39_a830_9a0d4b89d1df.slice/crio-3b88787a7d1463b67b2a852ede73730a39c6d0d3f161afa6b2b0986fc8c99f7c WatchSource:0}: Error finding container 3b88787a7d1463b67b2a852ede73730a39c6d0d3f161afa6b2b0986fc8c99f7c: Status 404 returned error can't find the container with id 3b88787a7d1463b67b2a852ede73730a39c6d0d3f161afa6b2b0986fc8c99f7c Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.688136 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.772674 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerStarted","Data":"3b88787a7d1463b67b2a852ede73730a39c6d0d3f161afa6b2b0986fc8c99f7c"} Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.776794 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerStarted","Data":"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"} Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.805455 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.969557 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lvbr8"] Jan 26 11:16:08 crc kubenswrapper[4619]: I0126 11:16:08.972041 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:08.994017 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvbr8"] Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.006913 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.009099 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.028508 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.028552 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc8rx\" (UniqueName: \"kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.029829 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.029875 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.131554 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.131621 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.131693 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.131711 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc8rx\" (UniqueName: 
\"kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.135152 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.135287 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.135754 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.149331 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc8rx\" (UniqueName: \"kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx\") pod \"nova-cell1-cell-mapping-lvbr8\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.283944 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19b268b-c1eb-4093-87b8-f69e4d95c9d3" path="/var/lib/kubelet/pods/b19b268b-c1eb-4093-87b8-f69e4d95c9d3/volumes" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.377984 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.790974 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerStarted","Data":"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7"} Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.791250 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerStarted","Data":"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c"} Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.796861 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerStarted","Data":"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"} Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.820838 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.820815986 podStartE2EDuration="2.820815986s" podCreationTimestamp="2026-01-26 11:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:16:09.807824309 +0000 UTC m=+1268.841865035" watchObservedRunningTime="2026-01-26 11:16:09.820815986 +0000 UTC m=+1268.854856702" Jan 26 11:16:09 crc kubenswrapper[4619]: W0126 11:16:09.862869 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf578a895_e882_4e3b_9ef7_3e1d5b14a13f.slice/crio-496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d WatchSource:0}: Error finding container 496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d: Status 404 returned error can't find the container with id 496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d Jan 26 11:16:09 crc kubenswrapper[4619]: I0126 11:16:09.872785 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvbr8"] Jan 26 11:16:10 crc kubenswrapper[4619]: I0126 11:16:10.806522 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvbr8" event={"ID":"f578a895-e882-4e3b-9ef7-3e1d5b14a13f","Type":"ContainerStarted","Data":"6815efff8cbe4552e31f853e8e4bf22750b69edf18cbd96a5c30176cde068444"} Jan 26 11:16:10 crc kubenswrapper[4619]: I0126 11:16:10.806876 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvbr8" event={"ID":"f578a895-e882-4e3b-9ef7-3e1d5b14a13f","Type":"ContainerStarted","Data":"496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d"} Jan 26 11:16:10 crc kubenswrapper[4619]: I0126 11:16:10.811336 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerStarted","Data":"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"} Jan 26 11:16:10 crc kubenswrapper[4619]: I0126 11:16:10.831235 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lvbr8" podStartSLOduration=2.831216946 podStartE2EDuration="2.831216946s" podCreationTimestamp="2026-01-26 11:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-26 11:16:10.826222592 +0000 UTC m=+1269.860263318" watchObservedRunningTime="2026-01-26 11:16:10.831216946 +0000 UTC m=+1269.865257672" Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.283409 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.313978 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.742529978 podStartE2EDuration="6.313963004s" podCreationTimestamp="2026-01-26 11:16:05 +0000 UTC" firstStartedPulling="2026-01-26 11:16:06.685749598 +0000 UTC m=+1265.719790314" lastFinishedPulling="2026-01-26 11:16:10.257182634 +0000 UTC m=+1269.291223340" observedRunningTime="2026-01-26 11:16:10.851317174 +0000 UTC m=+1269.885357890" watchObservedRunningTime="2026-01-26 11:16:11.313963004 +0000 UTC m=+1270.348003720" Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.405906 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.406139 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="dnsmasq-dns" containerID="cri-o://d7c41d08f0808fe850195c144ddbd6902c46168c9cb71400eee4c7b842c77240" gracePeriod=10 Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.821950 4619 generic.go:334] "Generic (PLEG): container finished" podID="6260663d-37a2-4439-9db3-d1b74606db09" containerID="d7c41d08f0808fe850195c144ddbd6902c46168c9cb71400eee4c7b842c77240" exitCode=0 Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.823014 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerDied","Data":"d7c41d08f0808fe850195c144ddbd6902c46168c9cb71400eee4c7b842c77240"} Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.823051 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" event={"ID":"6260663d-37a2-4439-9db3-d1b74606db09","Type":"ContainerDied","Data":"662a072a4a27c834b45e6af78fe0f70bbf8c9471d747cafde33a3930e805bca9"} Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.823062 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="662a072a4a27c834b45e6af78fe0f70bbf8c9471d747cafde33a3930e805bca9" Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.823679 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:16:11 crc kubenswrapper[4619]: I0126 11:16:11.928839 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025237 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025367 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnm27\" (UniqueName: \"kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025399 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025441 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025455 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.025498 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb\") pod \"6260663d-37a2-4439-9db3-d1b74606db09\" (UID: \"6260663d-37a2-4439-9db3-d1b74606db09\") " Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.034106 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27" (OuterVolumeSpecName: "kube-api-access-tnm27") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "kube-api-access-tnm27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.085089 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.101121 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.108930 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.122377 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.127972 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnm27\" (UniqueName: \"kubernetes.io/projected/6260663d-37a2-4439-9db3-d1b74606db09-kube-api-access-tnm27\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.128012 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.128024 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.128033 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.128042 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.133439 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config" (OuterVolumeSpecName: "config") pod "6260663d-37a2-4439-9db3-d1b74606db09" (UID: "6260663d-37a2-4439-9db3-d1b74606db09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.229631 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6260663d-37a2-4439-9db3-d1b74606db09-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.829285 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.863790 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:16:12 crc kubenswrapper[4619]: I0126 11:16:12.871512 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zl7zs"] Jan 26 11:16:13 crc kubenswrapper[4619]: I0126 11:16:13.273033 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6260663d-37a2-4439-9db3-d1b74606db09" path="/var/lib/kubelet/pods/6260663d-37a2-4439-9db3-d1b74606db09/volumes" Jan 26 11:16:14 crc kubenswrapper[4619]: I0126 11:16:14.861378 4619 generic.go:334] "Generic (PLEG): container finished" podID="f578a895-e882-4e3b-9ef7-3e1d5b14a13f" containerID="6815efff8cbe4552e31f853e8e4bf22750b69edf18cbd96a5c30176cde068444" exitCode=0 Jan 26 11:16:14 crc kubenswrapper[4619]: I0126 11:16:14.862724 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvbr8" event={"ID":"f578a895-e882-4e3b-9ef7-3e1d5b14a13f","Type":"ContainerDied","Data":"6815efff8cbe4552e31f853e8e4bf22750b69edf18cbd96a5c30176cde068444"} Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.239414 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.316944 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle\") pod \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.317419 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc8rx\" (UniqueName: \"kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx\") pod \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.317447 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data\") pod \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.317502 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts\") pod \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\" (UID: \"f578a895-e882-4e3b-9ef7-3e1d5b14a13f\") " Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.322516 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx" (OuterVolumeSpecName: "kube-api-access-dc8rx") pod "f578a895-e882-4e3b-9ef7-3e1d5b14a13f" (UID: "f578a895-e882-4e3b-9ef7-3e1d5b14a13f"). InnerVolumeSpecName "kube-api-access-dc8rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.323328 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts" (OuterVolumeSpecName: "scripts") pod "f578a895-e882-4e3b-9ef7-3e1d5b14a13f" (UID: "f578a895-e882-4e3b-9ef7-3e1d5b14a13f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.353049 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f578a895-e882-4e3b-9ef7-3e1d5b14a13f" (UID: "f578a895-e882-4e3b-9ef7-3e1d5b14a13f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.355975 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data" (OuterVolumeSpecName: "config-data") pod "f578a895-e882-4e3b-9ef7-3e1d5b14a13f" (UID: "f578a895-e882-4e3b-9ef7-3e1d5b14a13f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.420230 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.420269 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.420284 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc8rx\" (UniqueName: \"kubernetes.io/projected/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-kube-api-access-dc8rx\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.420297 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f578a895-e882-4e3b-9ef7-3e1d5b14a13f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.842340 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-zl7zs" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.882096 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvbr8" event={"ID":"f578a895-e882-4e3b-9ef7-3e1d5b14a13f","Type":"ContainerDied","Data":"496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d"} Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.882140 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496d747369d7584073c72e67490cec4e614814d16144b0aaecc11073e863cc4d" Jan 26 11:16:16 crc kubenswrapper[4619]: I0126 11:16:16.882594 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvbr8" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.080647 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.080856 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-log" containerID="cri-o://753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" gracePeriod=30 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.081215 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-api" containerID="cri-o://8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" gracePeriod=30 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.099941 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.100189 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93659a38-4ded-4760-bd12-e6eddbe28f53" containerName="nova-scheduler-scheduler" containerID="cri-o://fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd" gracePeriod=30 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.182907 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.183134 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-log" containerID="cri-o://c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f" gracePeriod=30 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.183193 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" containerID="cri-o://18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3" gracePeriod=30 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.671015 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.742976 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.743048 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.743095 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.743146 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9f5\" (UniqueName: \"kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.743171 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.743202 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data\") pod \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\" (UID: \"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df\") " Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.746069 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs" (OuterVolumeSpecName: "logs") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.760001 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5" (OuterVolumeSpecName: "kube-api-access-pk9f5") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "kube-api-access-pk9f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.784133 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.784171 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data" (OuterVolumeSpecName: "config-data") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.799412 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.804854 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" (UID: "7dc3e4f6-b340-4b39-a830-9a0d4b89d1df"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.844952 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9f5\" (UniqueName: \"kubernetes.io/projected/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-kube-api-access-pk9f5\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.844984 4619 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.844993 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.845004 4619 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.845013 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.845020 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896039 4619 generic.go:334] "Generic (PLEG): container finished" podID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerID="8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" exitCode=0 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896067 4619 generic.go:334] "Generic (PLEG): container finished" podID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerID="753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" exitCode=143 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896110 
4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerDied","Data":"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7"} Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896133 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerDied","Data":"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c"} Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896144 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dc3e4f6-b340-4b39-a830-9a0d4b89d1df","Type":"ContainerDied","Data":"3b88787a7d1463b67b2a852ede73730a39c6d0d3f161afa6b2b0986fc8c99f7c"} Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896158 4619 scope.go:117] "RemoveContainer" containerID="8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.896281 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.899985 4619 generic.go:334] "Generic (PLEG): container finished" podID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerID="c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f" exitCode=143 Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.900032 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerDied","Data":"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f"} Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.932316 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.937419 4619 scope.go:117] "RemoveContainer" containerID="753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.959237 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.970778 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.971181 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="dnsmasq-dns" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971197 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="dnsmasq-dns" Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.971207 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f578a895-e882-4e3b-9ef7-3e1d5b14a13f" containerName="nova-manage" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971215 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f578a895-e882-4e3b-9ef7-3e1d5b14a13f" containerName="nova-manage" Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.971227 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-log" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971233 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-log" Jan 26 11:16:17 crc kubenswrapper[4619]: 
E0126 11:16:17.971247 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-api" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971254 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-api" Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.971281 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="init" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971288 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="init" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971444 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-api" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971453 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" containerName="nova-api-log" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971463 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6260663d-37a2-4439-9db3-d1b74606db09" containerName="dnsmasq-dns" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.971472 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f578a895-e882-4e3b-9ef7-3e1d5b14a13f" containerName="nova-manage" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.972509 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.976859 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.976937 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.977035 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.986043 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.990842 4619 scope.go:117] "RemoveContainer" containerID="8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.994073 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7\": container with ID starting with 8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7 not found: ID does not exist" containerID="8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.994115 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7"} err="failed to get container status \"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7\": rpc error: code = NotFound desc = could not find container \"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7\": container with ID starting with 8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7 not found: ID 
does not exist" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.994143 4619 scope.go:117] "RemoveContainer" containerID="753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" Jan 26 11:16:17 crc kubenswrapper[4619]: E0126 11:16:17.994847 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c\": container with ID starting with 753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c not found: ID does not exist" containerID="753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.994878 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c"} err="failed to get container status \"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c\": rpc error: code = NotFound desc = could not find container \"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c\": container with ID starting with 753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c not found: ID does not exist" Jan 26 11:16:17 crc kubenswrapper[4619]: I0126 11:16:17.994892 4619 scope.go:117] "RemoveContainer" containerID="8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.007843 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7"} err="failed to get container status \"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7\": rpc error: code = NotFound desc = could not find container \"8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7\": container with ID starting with 8f54815712ec1cf8e0d0a00415e34490169b7ba461f2f20edcf314ece100edf7 not found: ID does not exist" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.007883 4619 scope.go:117] "RemoveContainer" containerID="753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.009586 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c"} err="failed to get container status \"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c\": rpc error: code = NotFound desc = could not find container \"753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c\": container with ID starting with 753e8f4f424c90b0526deac0e4016c40e851f5f3a6c69539e16cf5fdfad46b3c not found: ID does not exist" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.048255 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-config-data\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.048308 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 
crc kubenswrapper[4619]: I0126 11:16:18.048347 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.048516 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ba972-f513-421c-b25d-c8ecbc095c0f-logs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.048643 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-public-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.048674 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v572s\" (UniqueName: \"kubernetes.io/projected/a99ba972-f513-421c-b25d-c8ecbc095c0f-kube-api-access-v572s\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.149885 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.150141 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.150275 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ba972-f513-421c-b25d-c8ecbc095c0f-logs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.150366 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-public-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.150445 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v572s\" (UniqueName: \"kubernetes.io/projected/a99ba972-f513-421c-b25d-c8ecbc095c0f-kube-api-access-v572s\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.150587 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-config-data\") pod \"nova-api-0\" (UID: 
\"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.151432 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a99ba972-f513-421c-b25d-c8ecbc095c0f-logs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.154318 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-config-data\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.154814 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-public-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.155076 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.155631 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99ba972-f513-421c-b25d-c8ecbc095c0f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.169074 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v572s\" (UniqueName: \"kubernetes.io/projected/a99ba972-f513-421c-b25d-c8ecbc095c0f-kube-api-access-v572s\") pod \"nova-api-0\" (UID: \"a99ba972-f513-421c-b25d-c8ecbc095c0f\") " pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.320023 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.720321 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.763278 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data\") pod \"93659a38-4ded-4760-bd12-e6eddbe28f53\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.763385 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle\") pod \"93659a38-4ded-4760-bd12-e6eddbe28f53\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.763425 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m8mg\" (UniqueName: \"kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg\") pod \"93659a38-4ded-4760-bd12-e6eddbe28f53\" (UID: \"93659a38-4ded-4760-bd12-e6eddbe28f53\") " Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.771928 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg" (OuterVolumeSpecName: "kube-api-access-5m8mg") pod "93659a38-4ded-4760-bd12-e6eddbe28f53" (UID: "93659a38-4ded-4760-bd12-e6eddbe28f53"). InnerVolumeSpecName "kube-api-access-5m8mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.789995 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93659a38-4ded-4760-bd12-e6eddbe28f53" (UID: "93659a38-4ded-4760-bd12-e6eddbe28f53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.792282 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data" (OuterVolumeSpecName: "config-data") pod "93659a38-4ded-4760-bd12-e6eddbe28f53" (UID: "93659a38-4ded-4760-bd12-e6eddbe28f53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.845605 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.866367 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.866404 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93659a38-4ded-4760-bd12-e6eddbe28f53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.866419 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m8mg\" (UniqueName: \"kubernetes.io/projected/93659a38-4ded-4760-bd12-e6eddbe28f53-kube-api-access-5m8mg\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.909966 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ba972-f513-421c-b25d-c8ecbc095c0f","Type":"ContainerStarted","Data":"f101fb2bdc50c0894ebaaf60d5e16df0452af3850c909c55dd371d87ecb321bc"} Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.919985 4619 generic.go:334] "Generic (PLEG): container finished" podID="93659a38-4ded-4760-bd12-e6eddbe28f53" containerID="fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd" exitCode=0 Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.920030 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93659a38-4ded-4760-bd12-e6eddbe28f53","Type":"ContainerDied","Data":"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd"} Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.920057 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93659a38-4ded-4760-bd12-e6eddbe28f53","Type":"ContainerDied","Data":"3bb0507d81ab6f290e10729bfa54fdc1f333d600fe7e818a731730b6256efbaf"} Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.920073 4619 scope.go:117] "RemoveContainer" containerID="fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.920077 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.942165 4619 scope.go:117] "RemoveContainer" containerID="fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd" Jan 26 11:16:18 crc kubenswrapper[4619]: E0126 11:16:18.942548 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd\": container with ID starting with fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd not found: ID does not exist" containerID="fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.942576 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd"} err="failed to get container status \"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd\": rpc error: code = NotFound desc = could not find container \"fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd\": container with ID starting with fe8acd747f9b4889c158c378547dbe65db294d9bc1ac84b2063fffd5ac8fa2cd not found: ID does not exist" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.975073 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.983971 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.992632 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:18 crc kubenswrapper[4619]: E0126 11:16:18.993078 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93659a38-4ded-4760-bd12-e6eddbe28f53" containerName="nova-scheduler-scheduler" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.993096 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="93659a38-4ded-4760-bd12-e6eddbe28f53" containerName="nova-scheduler-scheduler" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.993292 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="93659a38-4ded-4760-bd12-e6eddbe28f53" containerName="nova-scheduler-scheduler" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.993980 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:16:18 crc kubenswrapper[4619]: I0126 11:16:18.999675 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.005795 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.070489 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.070639 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wws\" (UniqueName: \"kubernetes.io/projected/a018ea11-c0b7-4523-b3f4-1367bb0073fd-kube-api-access-77wws\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.070809 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-config-data\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.172003 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-config-data\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.172085 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.172120 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wws\" (UniqueName: \"kubernetes.io/projected/a018ea11-c0b7-4523-b3f4-1367bb0073fd-kube-api-access-77wws\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.177452 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.178134 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a018ea11-c0b7-4523-b3f4-1367bb0073fd-config-data\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.185809 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wws\" (UniqueName: 
\"kubernetes.io/projected/a018ea11-c0b7-4523-b3f4-1367bb0073fd-kube-api-access-77wws\") pod \"nova-scheduler-0\" (UID: \"a018ea11-c0b7-4523-b3f4-1367bb0073fd\") " pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.280478 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc3e4f6-b340-4b39-a830-9a0d4b89d1df" path="/var/lib/kubelet/pods/7dc3e4f6-b340-4b39-a830-9a0d4b89d1df/volumes" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.281103 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93659a38-4ded-4760-bd12-e6eddbe28f53" path="/var/lib/kubelet/pods/93659a38-4ded-4760-bd12-e6eddbe28f53/volumes" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.323389 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.843043 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:16:19 crc kubenswrapper[4619]: W0126 11:16:19.859821 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda018ea11_c0b7_4523_b3f4_1367bb0073fd.slice/crio-0ed51a6c9aaa2907b42d369065b4f3cda4afd00b01986beb10434d8f81e2e7d7 WatchSource:0}: Error finding container 0ed51a6c9aaa2907b42d369065b4f3cda4afd00b01986beb10434d8f81e2e7d7: Status 404 returned error can't find the container with id 0ed51a6c9aaa2907b42d369065b4f3cda4afd00b01986beb10434d8f81e2e7d7 Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.934422 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ba972-f513-421c-b25d-c8ecbc095c0f","Type":"ContainerStarted","Data":"89156db8b9a5f2f3c81e17158762092552500da753395772ade15db8558a0221"} Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.934493 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a99ba972-f513-421c-b25d-c8ecbc095c0f","Type":"ContainerStarted","Data":"296789ffc72e54a4280d9b71287bd5e4ebdd330de92368fdf4ec4ee8635b1543"} Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.939452 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a018ea11-c0b7-4523-b3f4-1367bb0073fd","Type":"ContainerStarted","Data":"0ed51a6c9aaa2907b42d369065b4f3cda4afd00b01986beb10434d8f81e2e7d7"} Jan 26 11:16:19 crc kubenswrapper[4619]: I0126 11:16:19.958576 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.958551722 podStartE2EDuration="2.958551722s" podCreationTimestamp="2026-01-26 11:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:16:19.954586127 +0000 UTC m=+1278.988626833" watchObservedRunningTime="2026-01-26 11:16:19.958551722 +0000 UTC m=+1278.992592468" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.757295 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.812779 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data\") pod \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.812866 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cg6z\" (UniqueName: \"kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z\") pod \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.812890 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle\") pod \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.812913 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs\") pod \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.813003 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs\") pod \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\" (UID: \"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e\") " Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.813897 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs" (OuterVolumeSpecName: "logs") pod "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" (UID: "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.840799 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z" (OuterVolumeSpecName: "kube-api-access-5cg6z") pod "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" (UID: "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e"). InnerVolumeSpecName "kube-api-access-5cg6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.870820 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" (UID: "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.878830 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data" (OuterVolumeSpecName: "config-data") pod "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" (UID: "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.915324 4619 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.915348 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.915359 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cg6z\" (UniqueName: \"kubernetes.io/projected/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-kube-api-access-5cg6z\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.915370 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.924325 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" (UID: "9eb01a17-aaf3-41e3-b9e9-bca348d41a5e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.952061 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a018ea11-c0b7-4523-b3f4-1367bb0073fd","Type":"ContainerStarted","Data":"bd6f89f4dae74fe6ad443b5ba56a2285e092f2463f2542d02ceeabcace906707"} Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.954432 4619 generic.go:334] "Generic (PLEG): container finished" podID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerID="18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3" exitCode=0 Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.954921 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.956802 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerDied","Data":"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3"} Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.956853 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb01a17-aaf3-41e3-b9e9-bca348d41a5e","Type":"ContainerDied","Data":"6db8706fa66c2be69736b72fa10f04d0171c977a3206b0ec4fc9dd20e237bfba"} Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.956874 4619 scope.go:117] "RemoveContainer" containerID="18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.987311 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.987289943 podStartE2EDuration="2.987289943s" podCreationTimestamp="2026-01-26 11:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:16:20.975820316 +0000 UTC m=+1280.009861052" watchObservedRunningTime="2026-01-26 11:16:20.987289943 +0000 UTC m=+1280.021330659" Jan 26 11:16:20 crc kubenswrapper[4619]: I0126 11:16:20.993245 4619 scope.go:117] "RemoveContainer" containerID="c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.001524 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.017774 4619 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.030166 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.032903 4619 scope.go:117] "RemoveContainer" containerID="18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.039839 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:21 crc kubenswrapper[4619]: E0126 11:16:21.040273 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.040291 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" Jan 26 11:16:21 crc kubenswrapper[4619]: E0126 11:16:21.040312 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-log" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.040320 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-log" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.040486 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-log" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 
11:16:21.040512 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" containerName="nova-metadata-metadata" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.041558 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.047554 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:21 crc kubenswrapper[4619]: E0126 11:16:21.052987 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3\": container with ID starting with 18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3 not found: ID does not exist" containerID="18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.053041 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3"} err="failed to get container status \"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3\": rpc error: code = NotFound desc = could not find container \"18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3\": container with ID starting with 18f67d0afd4a84dbce7261e102c759d66e24a231e69767e45610b4b26b50e1b3 not found: ID does not exist" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.053073 4619 scope.go:117] "RemoveContainer" containerID="c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.053279 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.053483 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 11:16:21 crc kubenswrapper[4619]: E0126 11:16:21.059002 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f\": container with ID starting with c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f not found: ID does not exist" containerID="c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.059099 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f"} err="failed to get container status \"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f\": rpc error: code = NotFound desc = could not find container \"c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f\": container with ID starting with c15231189ba7eb4c45808f091e1144ff4e7a76eb7bbf5463fd487bfd923f1e5f not found: ID does not exist" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.130380 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gkpt\" (UniqueName: \"kubernetes.io/projected/5755883f-06f0-4bf0-888d-2742d71ddf6c-kube-api-access-9gkpt\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc 
kubenswrapper[4619]: I0126 11:16:21.130439 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.130511 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-config-data\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.130534 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.130581 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5755883f-06f0-4bf0-888d-2742d71ddf6c-logs\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.232082 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gkpt\" (UniqueName: \"kubernetes.io/projected/5755883f-06f0-4bf0-888d-2742d71ddf6c-kube-api-access-9gkpt\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.232138 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.232207 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-config-data\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.232224 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.232265 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5755883f-06f0-4bf0-888d-2742d71ddf6c-logs\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.233127 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5755883f-06f0-4bf0-888d-2742d71ddf6c-logs\") pod 
\"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.237767 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.238244 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.239466 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5755883f-06f0-4bf0-888d-2742d71ddf6c-config-data\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.252150 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gkpt\" (UniqueName: \"kubernetes.io/projected/5755883f-06f0-4bf0-888d-2742d71ddf6c-kube-api-access-9gkpt\") pod \"nova-metadata-0\" (UID: \"5755883f-06f0-4bf0-888d-2742d71ddf6c\") " pod="openstack/nova-metadata-0" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.279016 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb01a17-aaf3-41e3-b9e9-bca348d41a5e" path="/var/lib/kubelet/pods/9eb01a17-aaf3-41e3-b9e9-bca348d41a5e/volumes" Jan 26 11:16:21 crc kubenswrapper[4619]: I0126 11:16:21.364540 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:16:22 crc kubenswrapper[4619]: I0126 11:16:21.829317 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:16:22 crc kubenswrapper[4619]: I0126 11:16:21.966989 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5755883f-06f0-4bf0-888d-2742d71ddf6c","Type":"ContainerStarted","Data":"b5915d89fc2e9926c6e6d28dbda58a0ddeff7384ae8fc970c9cc704b9c498ce4"} Jan 26 11:16:22 crc kubenswrapper[4619]: I0126 11:16:22.982859 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5755883f-06f0-4bf0-888d-2742d71ddf6c","Type":"ContainerStarted","Data":"7c2ff8c91bdb418832c7a815c7c854358b99ffd72f2a7fc2f1138c0f5ea45ba3"} Jan 26 11:16:22 crc kubenswrapper[4619]: I0126 11:16:22.983130 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5755883f-06f0-4bf0-888d-2742d71ddf6c","Type":"ContainerStarted","Data":"ad1121c402d69cfafeb795a0778f9aba4608f81975f4b99b8d5a09b9e4559932"} Jan 26 11:16:23 crc kubenswrapper[4619]: I0126 11:16:23.019327 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.019297641 podStartE2EDuration="3.019297641s" podCreationTimestamp="2026-01-26 11:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:16:23.001154277 +0000 UTC m=+1282.035195043" watchObservedRunningTime="2026-01-26 11:16:23.019297641 +0000 UTC m=+1282.053338397" Jan 26 11:16:24 crc kubenswrapper[4619]: I0126 11:16:24.324539 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 11:16:26 crc kubenswrapper[4619]: I0126 11:16:26.366044 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:16:26 crc kubenswrapper[4619]: I0126 11:16:26.366427 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:16:28 crc kubenswrapper[4619]: I0126 11:16:28.321037 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:16:28 crc kubenswrapper[4619]: I0126 11:16:28.321103 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:16:29 crc kubenswrapper[4619]: I0126 11:16:29.324313 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 11:16:29 crc kubenswrapper[4619]: I0126 11:16:29.334753 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a99ba972-f513-421c-b25d-c8ecbc095c0f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:16:29 crc kubenswrapper[4619]: I0126 11:16:29.334796 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a99ba972-f513-421c-b25d-c8ecbc095c0f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:16:29 crc kubenswrapper[4619]: I0126 11:16:29.367581 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Jan 26 11:16:30 crc kubenswrapper[4619]: I0126 11:16:30.100113 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 11:16:31 crc kubenswrapper[4619]: I0126 11:16:31.366312 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:16:31 crc kubenswrapper[4619]: I0126 11:16:31.366669 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:16:32 crc kubenswrapper[4619]: I0126 11:16:32.388882 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5755883f-06f0-4bf0-888d-2742d71ddf6c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:16:32 crc kubenswrapper[4619]: I0126 11:16:32.388957 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5755883f-06f0-4bf0-888d-2742d71ddf6c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:16:36 crc kubenswrapper[4619]: I0126 11:16:36.146789 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 11:16:38 crc kubenswrapper[4619]: I0126 11:16:38.326839 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:16:38 crc kubenswrapper[4619]: I0126 11:16:38.328673 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:16:38 crc kubenswrapper[4619]: I0126 11:16:38.329677 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:16:38 crc kubenswrapper[4619]: I0126 11:16:38.335588 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:16:39 crc kubenswrapper[4619]: I0126 11:16:39.172191 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:16:39 crc kubenswrapper[4619]: I0126 11:16:39.183297 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.138604 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.141282 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d4e064ab-47fc-497d-b783-9debc84b2c7a" containerName="kube-state-metrics" containerID="cri-o://aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88" gracePeriod=30 Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.584074 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.775462 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbr4f\" (UniqueName: \"kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f\") pod \"d4e064ab-47fc-497d-b783-9debc84b2c7a\" (UID: \"d4e064ab-47fc-497d-b783-9debc84b2c7a\") " Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.782988 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f" (OuterVolumeSpecName: "kube-api-access-lbr4f") pod "d4e064ab-47fc-497d-b783-9debc84b2c7a" (UID: "d4e064ab-47fc-497d-b783-9debc84b2c7a"). InnerVolumeSpecName "kube-api-access-lbr4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:16:40 crc kubenswrapper[4619]: I0126 11:16:40.877807 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbr4f\" (UniqueName: \"kubernetes.io/projected/d4e064ab-47fc-497d-b783-9debc84b2c7a-kube-api-access-lbr4f\") on node \"crc\" DevicePath \"\"" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.216954 4619 generic.go:334] "Generic (PLEG): container finished" podID="d4e064ab-47fc-497d-b783-9debc84b2c7a" containerID="aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88" exitCode=2 Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.217906 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.217967 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d4e064ab-47fc-497d-b783-9debc84b2c7a","Type":"ContainerDied","Data":"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88"} Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.218028 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d4e064ab-47fc-497d-b783-9debc84b2c7a","Type":"ContainerDied","Data":"d9fc65d2f614fa37d0ba46856ab1e36daa6456ec8ef07300a581af861cc9224e"} Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.218071 4619 scope.go:117] "RemoveContainer" containerID="aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.273240 4619 scope.go:117] "RemoveContainer" containerID="aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88" Jan 26 11:16:41 crc kubenswrapper[4619]: E0126 11:16:41.275430 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88\": container with ID starting with aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88 not found: ID does not exist" containerID="aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.275469 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88"} err="failed to get container status \"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88\": rpc error: code = NotFound desc = could not find container \"aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88\": container with ID starting with 
aa424e94465b032123b894835ea42cd277602fea5bb6170cdb822850e8689f88 not found: ID does not exist" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.316985 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.319518 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.332890 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:16:41 crc kubenswrapper[4619]: E0126 11:16:41.333497 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e064ab-47fc-497d-b783-9debc84b2c7a" containerName="kube-state-metrics" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.333515 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e064ab-47fc-497d-b783-9debc84b2c7a" containerName="kube-state-metrics" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.333878 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e064ab-47fc-497d-b783-9debc84b2c7a" containerName="kube-state-metrics" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.334670 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.366964 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.367825 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.370101 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.404908 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.405874 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.422548 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsz2b\" (UniqueName: \"kubernetes.io/projected/8f4bc98f-79c3-4192-973d-32d8df967077-kube-api-access-qsz2b\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.422708 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.422821 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0" Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.422940 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.432954 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.432999 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.524905 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.525038 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.525071 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.525098 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsz2b\" (UniqueName: \"kubernetes.io/projected/8f4bc98f-79c3-4192-973d-32d8df967077-kube-api-access-qsz2b\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.535246 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.535351 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.538403 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4bc98f-79c3-4192-973d-32d8df967077-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.550226 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsz2b\" (UniqueName: \"kubernetes.io/projected/8f4bc98f-79c3-4192-973d-32d8df967077-kube-api-access-qsz2b\") pod \"kube-state-metrics-0\" (UID: \"8f4bc98f-79c3-4192-973d-32d8df967077\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:16:41 crc kubenswrapper[4619]: I0126 11:16:41.697347 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.107373 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.107920 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="sg-core" containerID="cri-o://5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec" gracePeriod=30
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.107935 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="proxy-httpd" containerID="cri-o://22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf" gracePeriod=30
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.107989 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-notification-agent" containerID="cri-o://a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760" gracePeriod=30
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.107872 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-central-agent" containerID="cri-o://cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455" gracePeriod=30
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.155187 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:16:42 crc kubenswrapper[4619]: I0126 11:16:42.234859 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8f4bc98f-79c3-4192-973d-32d8df967077","Type":"ContainerStarted","Data":"1b9433b89a20393b447c46c7b8feb8808d43bbbbd4b9b124ab2dadeea10c6867"}
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249125 4619 generic.go:334] "Generic (PLEG): container finished" podID="372ae209-527d-4911-a2fe-c44eb520a653" containerID="22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf" exitCode=0
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249433 4619 generic.go:334] "Generic (PLEG): container finished" podID="372ae209-527d-4911-a2fe-c44eb520a653" containerID="5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec" exitCode=2
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249444 4619 generic.go:334] "Generic (PLEG): container finished" podID="372ae209-527d-4911-a2fe-c44eb520a653" containerID="cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455" exitCode=0
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249198 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerDied","Data":"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"}
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249507 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerDied","Data":"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"}
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.249523 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerDied","Data":"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"}
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.252163 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8f4bc98f-79c3-4192-973d-32d8df967077","Type":"ContainerStarted","Data":"f41b1ae744cd3b0bfc9adf0efc15b47caafecd76bd895733103946ab05433d5f"}
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.252280 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.273238 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e064ab-47fc-497d-b783-9debc84b2c7a" path="/var/lib/kubelet/pods/d4e064ab-47fc-497d-b783-9debc84b2c7a/volumes"
Jan 26 11:16:43 crc kubenswrapper[4619]: I0126 11:16:43.280788 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.908737745 podStartE2EDuration="2.2807722s" podCreationTimestamp="2026-01-26 11:16:41 +0000 UTC" firstStartedPulling="2026-01-26 11:16:42.168066554 +0000 UTC m=+1301.202107270" lastFinishedPulling="2026-01-26 11:16:42.540100999 +0000 UTC m=+1301.574141725" observedRunningTime="2026-01-26 11:16:43.277562605 +0000 UTC m=+1302.311603321" watchObservedRunningTime="2026-01-26 11:16:43.2807722 +0000 UTC m=+1302.314812916"
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.234500 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.235071 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.876429 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997032 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997155 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qggm2\" (UniqueName: \"kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997246 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997285 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997382 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997463 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.997535 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data\") pod \"372ae209-527d-4911-a2fe-c44eb520a653\" (UID: \"372ae209-527d-4911-a2fe-c44eb520a653\") "
Jan 26 11:16:44 crc kubenswrapper[4619]: I0126 11:16:44.998090 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:44.998404 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.014450 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts" (OuterVolumeSpecName: "scripts") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.016835 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2" (OuterVolumeSpecName: "kube-api-access-qggm2") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "kube-api-access-qggm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.027505 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.078960 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100287 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100476 4619 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100552 4619 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100713 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qggm2\" (UniqueName: \"kubernetes.io/projected/372ae209-527d-4911-a2fe-c44eb520a653-kube-api-access-qggm2\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100877 4619 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/372ae209-527d-4911-a2fe-c44eb520a653-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.100952 4619 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.129831 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data" (OuterVolumeSpecName: "config-data") pod "372ae209-527d-4911-a2fe-c44eb520a653" (UID: "372ae209-527d-4911-a2fe-c44eb520a653"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.204398 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372ae209-527d-4911-a2fe-c44eb520a653-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.284275 4619 generic.go:334] "Generic (PLEG): container finished" podID="372ae209-527d-4911-a2fe-c44eb520a653" containerID="a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760" exitCode=0
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.284327 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerDied","Data":"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"}
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.284361 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"372ae209-527d-4911-a2fe-c44eb520a653","Type":"ContainerDied","Data":"e32f5546a8ad81e0dcb319186f1c2be9cc32b726d11f2a7c6e51dc933a5ad254"}
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.284380 4619 scope.go:117] "RemoveContainer" containerID="22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.284480 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.306367 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.314363 4619 scope.go:117] "RemoveContainer" containerID="5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.315529 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.342823 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.343183 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="sg-core"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343201 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="sg-core"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.343231 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="proxy-httpd"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343237 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="proxy-httpd"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.343262 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-notification-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343267 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-notification-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.343281 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-central-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343289 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-central-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343527 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-notification-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343549 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="ceilometer-central-agent"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343570 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="proxy-httpd"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.343581 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ae209-527d-4911-a2fe-c44eb520a653" containerName="sg-core"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.345433 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.349207 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.349481 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.349598 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.352664 4619 scope.go:117] "RemoveContainer" containerID="a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.360488 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.401903 4619 scope.go:117] "RemoveContainer" containerID="cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.409934 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.409980 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410123 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6kn\" (UniqueName: \"kubernetes.io/projected/c931ebbd-0d84-4a52-9672-d62698618f7f-kube-api-access-tf6kn\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410282 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-run-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410309 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-log-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410498 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-config-data\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410609 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.410665 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-scripts\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.421713 4619 scope.go:117] "RemoveContainer" containerID="22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.422296 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf\": container with ID starting with 22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf not found: ID does not exist" containerID="22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.422346 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf"} err="failed to get container status \"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf\": rpc error: code = NotFound desc = could not find container \"22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf\": container with ID starting with 22a224241875082210f0fe725431e0c6a6e1c1bb19653e04c7c90d1daa9bbfaf not found: ID does not exist"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.422381 4619 scope.go:117] "RemoveContainer" containerID="5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.422861 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec\": container with ID starting with 5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec not found: ID does not exist" containerID="5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.422892 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec"} err="failed to get container status \"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec\": rpc error: code = NotFound desc = could not find container \"5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec\": container with ID starting with 5fa58cb4e5aeed6c1c31f65d88e5a8a163d95bf0529590254623d7c67b0bd0ec not found: ID does not exist"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.422912 4619 scope.go:117] "RemoveContainer" containerID="a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.423165 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760\": container with ID starting with a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760 not found: ID does not exist" containerID="a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.423201 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760"} err="failed to get container status \"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760\": rpc error: code = NotFound desc = could not find container \"a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760\": container with ID starting with a81462f8cbff6adf95eaf21c682d76cdc8b617f3e7000a4a291ff18ccb6e7760 not found: ID does not exist"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.423221 4619 scope.go:117] "RemoveContainer" containerID="cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"
Jan 26 11:16:45 crc kubenswrapper[4619]: E0126 11:16:45.423551 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455\": container with ID starting with cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455 not found: ID does not exist" containerID="cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.423574 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455"} err="failed to get container status \"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455\": rpc error: code = NotFound desc = could not find container \"cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455\": container with ID starting with cbbf47c2ca2d14146a5cfb19cbea36d33238ed8fc2e42136fad4b89cc0ccb455 not found: ID does not exist"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.512492 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-run-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.512563 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-log-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.512756 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-config-data\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.512868 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.512976 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-scripts\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.513296 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-log-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.513072 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c931ebbd-0d84-4a52-9672-d62698618f7f-run-httpd\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.513876 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.514523 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.514693 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6kn\" (UniqueName: \"kubernetes.io/projected/c931ebbd-0d84-4a52-9672-d62698618f7f-kube-api-access-tf6kn\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.517235 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-scripts\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.518059 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-config-data\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.518096 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.519172 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.526348 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931ebbd-0d84-4a52-9672-d62698618f7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.538095 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6kn\" (UniqueName: \"kubernetes.io/projected/c931ebbd-0d84-4a52-9672-d62698618f7f-kube-api-access-tf6kn\") pod \"ceilometer-0\" (UID: \"c931ebbd-0d84-4a52-9672-d62698618f7f\") " pod="openstack/ceilometer-0"
Jan 26 11:16:45 crc kubenswrapper[4619]: I0126 11:16:45.677872 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:16:46 crc kubenswrapper[4619]: W0126 11:16:46.121939 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc931ebbd_0d84_4a52_9672_d62698618f7f.slice/crio-3e9a0e2e220de173a86dbbf744b318cfaae102af651625fc80129e01b7ec4dc5 WatchSource:0}: Error finding container 3e9a0e2e220de173a86dbbf744b318cfaae102af651625fc80129e01b7ec4dc5: Status 404 returned error can't find the container with id 3e9a0e2e220de173a86dbbf744b318cfaae102af651625fc80129e01b7ec4dc5
Jan 26 11:16:46 crc kubenswrapper[4619]: I0126 11:16:46.132358 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:16:46 crc kubenswrapper[4619]: I0126 11:16:46.292972 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c931ebbd-0d84-4a52-9672-d62698618f7f","Type":"ContainerStarted","Data":"3e9a0e2e220de173a86dbbf744b318cfaae102af651625fc80129e01b7ec4dc5"}
Jan 26 11:16:47 crc kubenswrapper[4619]: I0126 11:16:47.274889 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372ae209-527d-4911-a2fe-c44eb520a653" path="/var/lib/kubelet/pods/372ae209-527d-4911-a2fe-c44eb520a653/volumes"
Jan 26 11:16:47 crc kubenswrapper[4619]: I0126 11:16:47.304401 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c931ebbd-0d84-4a52-9672-d62698618f7f","Type":"ContainerStarted","Data":"a51ac480c224af8bb304892d125493c7ea359f2730c893467d6e5ce01dbfcdcf"}
Jan 26 11:16:48 crc kubenswrapper[4619]: I0126 11:16:48.315130 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c931ebbd-0d84-4a52-9672-d62698618f7f","Type":"ContainerStarted","Data":"351a991dacf5d48d5248c98661e01af3b87080c920a2446a20a70518047dfff5"}
Jan 26 11:16:48 crc kubenswrapper[4619]: I0126 11:16:48.315717 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c931ebbd-0d84-4a52-9672-d62698618f7f","Type":"ContainerStarted","Data":"193060e25c997f39850c1783b1a7bdfa41a7e1faf53df4fa83690183b136cac6"}
Jan 26 11:16:49 crc kubenswrapper[4619]: I0126 11:16:49.520191 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 11:16:50 crc kubenswrapper[4619]: I0126 11:16:50.331348 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c931ebbd-0d84-4a52-9672-d62698618f7f","Type":"ContainerStarted","Data":"4f96ae394796835b3564c84cd744bb19741b4ca81bea4fa807f21473348957b7"}
Jan 26 11:16:50 crc kubenswrapper[4619]: I0126 11:16:50.331520 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 11:16:50 crc kubenswrapper[4619]: I0126 11:16:50.354615 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 11:16:50 crc kubenswrapper[4619]: I0126 11:16:50.386812 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.104931289 podStartE2EDuration="5.386794995s" podCreationTimestamp="2026-01-26 11:16:45 +0000 UTC" firstStartedPulling="2026-01-26 11:16:46.124270695 +0000 UTC m=+1305.158311411" lastFinishedPulling="2026-01-26 11:16:49.406134401 +0000 UTC m=+1308.440175117" observedRunningTime="2026-01-26 11:16:50.381867483 +0000 UTC m=+1309.415908199" watchObservedRunningTime="2026-01-26 11:16:50.386794995 +0000 UTC m=+1309.420835711"
Jan 26 11:16:51 crc kubenswrapper[4619]: I0126 11:16:51.714797 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 26 11:16:54 crc kubenswrapper[4619]: I0126 11:16:54.790338 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="rabbitmq" containerID="cri-o://207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd" gracePeriod=604795
Jan 26 11:16:55 crc kubenswrapper[4619]: I0126 11:16:55.693714 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="rabbitmq" containerID="cri-o://d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8" gracePeriod=604795
Jan 26 11:16:56 crc kubenswrapper[4619]: I0126 11:16:56.310036 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused"
Jan 26 11:16:56 crc kubenswrapper[4619]: I0126 11:16:56.658673 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.408576 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.434354 4619 generic.go:334] "Generic (PLEG): container finished" podID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerID="207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd" exitCode=0
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.434421 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerDied","Data":"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"}
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.434453 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf","Type":"ContainerDied","Data":"6ba862812bf14c204f81eb75ff540e85170362cc14ed22eb8ddfe667205d2c2f"}
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.434474 4619 scope.go:117] "RemoveContainer" containerID="207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.434663 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435332 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435376 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435423 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435446 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435469 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435493 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435513 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435539 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435588 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435705 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.435763 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw2x5\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5\") pod \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\" (UID: \"33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf\") "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.440144 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.444341 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.448231 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.458808 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info" (OuterVolumeSpecName: "pod-info") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.470338 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5" (OuterVolumeSpecName: "kube-api-access-cw2x5") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "kube-api-access-cw2x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.472846 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.472976 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.473693 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.487607 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data" (OuterVolumeSpecName: "config-data") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537656 4619 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537689 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw2x5\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-kube-api-access-cw2x5\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537700 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537709 4619 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537718 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537726 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537734 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537742 4619 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-pod-info\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.537767 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.571396 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf" (OuterVolumeSpecName: "server-conf") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.571560 4619 scope.go:117] "RemoveContainer" containerID="8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.581665 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.589817 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" (UID: "33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.611418 4619 scope.go:117] "RemoveContainer" containerID="207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"
Jan 26 11:17:01 crc kubenswrapper[4619]: E0126 11:17:01.611884 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd\": container with ID starting with 207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd not found: ID does not exist" containerID="207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.611933 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd"} err="failed to get container status \"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd\": rpc error: code = NotFound desc = could not find container \"207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd\": container with ID starting with 207f1c7a38d9e1cea727a7145e775450b7ed708088f47c26e5efc7b1c74309bd not found: ID does not exist"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.611962 4619 scope.go:117] "RemoveContainer" containerID="8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"
Jan 26 11:17:01 crc kubenswrapper[4619]: E0126 11:17:01.612200 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0\": container with ID starting with 8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0 not found: ID does not exist" containerID="8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.612221 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0"} err="failed to get container status \"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0\": rpc error: code = NotFound desc = could not find container \"8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0\": container with ID starting with 8bf18e3384d5a877bc084315a99d0bd326ced764710cfac705eae40f57cfe4f0 not found: ID does not exist"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.640689 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.640725 4619 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.640734 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.765396 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.772849 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.798647 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 11:17:01 crc kubenswrapper[4619]: E0126 11:17:01.798998 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="rabbitmq"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.799014 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="rabbitmq"
Jan 26 11:17:01 crc kubenswrapper[4619]: E0126 11:17:01.799037 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="setup-container"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.799044 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="setup-container"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.799211 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" containerName="rabbitmq"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.800037 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805665 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805747 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805669 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sl2kv"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.806053 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805765 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805681 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.805723 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.823356 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963286 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963348 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963366 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dhc\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-kube-api-access-k8dhc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963391 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963428 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963452 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963538 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963556 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963576 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963596 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:01 crc kubenswrapper[4619]: I0126 11:17:01.963638 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.078653 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.079079 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.080778 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.080839 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.080890 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.080940 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.081009 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.081066 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.081144 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.081175 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dhc\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-kube-api-access-k8dhc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.081225 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.084945 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.089354 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.089758 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.090578 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.091394 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.092974 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.093273 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.103589 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.140218 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.164753 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dhc\" (UniqueName: \"kubernetes.io/projected/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-kube-api-access-k8dhc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.187192 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.200602 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85\") " pod="openstack/rabbitmq-server-0"
Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.281281 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386500 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386604 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386658 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386747 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386777 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386803 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386825 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386843 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386864 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386890 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njt9b\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: 
\"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.386905 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data\") pod \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\" (UID: \"213a8fd2-1f05-4287-b7f2-dfcd18d94399\") " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.387650 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.387978 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.391761 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.392104 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.413038 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.413207 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b" (OuterVolumeSpecName: "kube-api-access-njt9b") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "kube-api-access-njt9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.414882 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.418985 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.420302 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info" (OuterVolumeSpecName: "pod-info") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.440826 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data" (OuterVolumeSpecName: "config-data") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.482541 4619 generic.go:334] "Generic (PLEG): container finished" podID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerID="d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8" exitCode=0 Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.482730 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerDied","Data":"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8"} Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.482771 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"213a8fd2-1f05-4287-b7f2-dfcd18d94399","Type":"ContainerDied","Data":"6e018d56cca18b5941a952e4672f515c39d0a81bd650fb26bc25d20a36d82632"} Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.482889 4619 scope.go:117] "RemoveContainer" containerID="d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.483207 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.488941 4619 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/213a8fd2-1f05-4287-b7f2-dfcd18d94399-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.488965 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.488975 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.488983 4619 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/213a8fd2-1f05-4287-b7f2-dfcd18d94399-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.488991 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njt9b\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-kube-api-access-njt9b\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.489002 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.489010 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.489019 4619 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.489038 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.535206 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.542922 4619 scope.go:117] "RemoveContainer" containerID="1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.573790 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf" (OuterVolumeSpecName: "server-conf") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.605276 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.605546 4619 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/213a8fd2-1f05-4287-b7f2-dfcd18d94399-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.605461 4619 scope.go:117] "RemoveContainer" containerID="d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8" Jan 26 11:17:02 crc kubenswrapper[4619]: E0126 11:17:02.610064 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8\": container with ID starting with d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8 not found: ID does not exist" containerID="d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.610100 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8"} err="failed to get container status \"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8\": rpc error: code = NotFound desc = could not find container \"d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8\": container with ID starting with d71f3849dc2b9df3792a998c97680c403cc3c75bc399475d54373dd9d1c451c8 not found: ID does not exist" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.610124 4619 scope.go:117] "RemoveContainer" containerID="1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b" Jan 26 11:17:02 crc kubenswrapper[4619]: E0126 11:17:02.610369 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b\": container with ID starting with 1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b not found: ID does not exist" containerID="1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.610384 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b"} err="failed to get container status \"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b\": rpc error: code = NotFound desc = could not find container \"1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b\": container with ID starting with 1be2f241afcc7c99d6a50d16e5bdf2fc4f62796cdc929b249b68cf1ef5a4a22b not found: ID does not exist" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.668922 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "213a8fd2-1f05-4287-b7f2-dfcd18d94399" (UID: "213a8fd2-1f05-4287-b7f2-dfcd18d94399"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.712672 4619 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/213a8fd2-1f05-4287-b7f2-dfcd18d94399-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.834910 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.855738 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.872855 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:17:02 crc kubenswrapper[4619]: E0126 11:17:02.873379 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="setup-container" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.873393 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="setup-container" Jan 26 11:17:02 crc kubenswrapper[4619]: E0126 11:17:02.873417 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="rabbitmq" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.873424 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="rabbitmq" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.873768 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" containerName="rabbitmq" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.874730 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.878376 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.878438 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.878564 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2pzqt" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.880391 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.893304 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.893545 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.893674 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 11:17:02 crc kubenswrapper[4619]: I0126 11:17:02.907348 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017091 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017168 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017189 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017206 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4190b4f-7c04-4c14-83b4-87e224fef035-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017240 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017263 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rlrw8\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-kube-api-access-rlrw8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017317 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4190b4f-7c04-4c14-83b4-87e224fef035-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017334 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017352 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017386 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.017406 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.081226 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119298 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119344 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119361 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4190b4f-7c04-4c14-83b4-87e224fef035-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: 
I0126 11:17:03.119392 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119415 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrw8\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-kube-api-access-rlrw8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119465 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4190b4f-7c04-4c14-83b4-87e224fef035-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119503 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119523 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119557 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119583 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119630 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.119679 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.120355 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.120853 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.122134 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.122181 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.122938 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4190b4f-7c04-4c14-83b4-87e224fef035-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.122984 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4190b4f-7c04-4c14-83b4-87e224fef035-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.123669 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.127024 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.131090 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4190b4f-7c04-4c14-83b4-87e224fef035-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.145457 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrw8\" (UniqueName: \"kubernetes.io/projected/f4190b4f-7c04-4c14-83b4-87e224fef035-kube-api-access-rlrw8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.161359 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f4190b4f-7c04-4c14-83b4-87e224fef035\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.212108 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.277895 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213a8fd2-1f05-4287-b7f2-dfcd18d94399" path="/var/lib/kubelet/pods/213a8fd2-1f05-4287-b7f2-dfcd18d94399/volumes" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.278600 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf" path="/var/lib/kubelet/pods/33fc7d1e-2f71-40fc-ab04-a3d88fc1f3bf/volumes" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.495530 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85","Type":"ContainerStarted","Data":"ffbbac35b7a24b56dbcd090840bb86263050d1f409b0e721e7e8a39022965787"} Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.523444 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.525118 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.529799 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530019 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530156 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530226 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530248 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 
11:17:03.530309 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prnd\" (UniqueName: \"kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530351 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.530453 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.549534 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631720 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631787 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631817 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631835 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631868 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prnd\" (UniqueName: \"kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631893 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.631937 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.632651 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.633790 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.633799 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.633935 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.633991 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.634369 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.650259 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prnd\" (UniqueName: \"kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd\") pod \"dnsmasq-dns-79bd4cc8c9-mdmqs\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.670909 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:17:03 crc kubenswrapper[4619]: W0126 11:17:03.675992 4619 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4190b4f_7c04_4c14_83b4_87e224fef035.slice/crio-46fef73b51bca33714e1927ef174e916174bb3d1e361f8d568c383f360064301 WatchSource:0}: Error finding container 46fef73b51bca33714e1927ef174e916174bb3d1e361f8d568c383f360064301: Status 404 returned error can't find the container with id 46fef73b51bca33714e1927ef174e916174bb3d1e361f8d568c383f360064301 Jan 26 11:17:03 crc kubenswrapper[4619]: I0126 11:17:03.848666 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:04 crc kubenswrapper[4619]: W0126 11:17:04.317114 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29902d6d_b3b9_4bb8_a47f_cd055fb5fc9e.slice/crio-d9146804c9090094ef065031976c5e04ef0798c67e64662cd96b842702a822ee WatchSource:0}: Error finding container d9146804c9090094ef065031976c5e04ef0798c67e64662cd96b842702a822ee: Status 404 returned error can't find the container with id d9146804c9090094ef065031976c5e04ef0798c67e64662cd96b842702a822ee Jan 26 11:17:04 crc kubenswrapper[4619]: I0126 11:17:04.354644 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:04 crc kubenswrapper[4619]: I0126 11:17:04.506611 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f4190b4f-7c04-4c14-83b4-87e224fef035","Type":"ContainerStarted","Data":"46fef73b51bca33714e1927ef174e916174bb3d1e361f8d568c383f360064301"} Jan 26 11:17:04 crc kubenswrapper[4619]: I0126 11:17:04.513870 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" event={"ID":"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e","Type":"ContainerStarted","Data":"d9146804c9090094ef065031976c5e04ef0798c67e64662cd96b842702a822ee"} Jan 26 11:17:05 crc kubenswrapper[4619]: I0126 11:17:05.524060 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f4190b4f-7c04-4c14-83b4-87e224fef035","Type":"ContainerStarted","Data":"be4c06f8e0e401f5c5ccd31eab18c573f7e0ac161f785b0a8b497ac89340a1be"} Jan 26 11:17:05 crc kubenswrapper[4619]: I0126 11:17:05.525140 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85","Type":"ContainerStarted","Data":"36850e49f874a2e12ff15b2c809f044e51b3157b6d735e4ee242b67f8986570b"} Jan 26 11:17:05 crc kubenswrapper[4619]: I0126 11:17:05.527679 4619 generic.go:334] "Generic (PLEG): container finished" podID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerID="c35eaf03c79609bc2c2a040041ac8aac6942eb47a284f73d80c8b580f8483d54" exitCode=0 Jan 26 11:17:05 crc kubenswrapper[4619]: I0126 11:17:05.527722 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" event={"ID":"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e","Type":"ContainerDied","Data":"c35eaf03c79609bc2c2a040041ac8aac6942eb47a284f73d80c8b580f8483d54"} Jan 26 11:17:06 crc kubenswrapper[4619]: I0126 11:17:06.549053 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" event={"ID":"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e","Type":"ContainerStarted","Data":"3e7c73a0f6cb5ef1270c4b8b965a5ce299fd0556e22b4fb79e6813989c42ddb4"} Jan 26 11:17:06 crc kubenswrapper[4619]: I0126 11:17:06.549737 4619 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:06 crc kubenswrapper[4619]: I0126 11:17:06.589247 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" podStartSLOduration=3.58922288 podStartE2EDuration="3.58922288s" podCreationTimestamp="2026-01-26 11:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:17:06.576237171 +0000 UTC m=+1325.610277927" watchObservedRunningTime="2026-01-26 11:17:06.58922288 +0000 UTC m=+1325.623263636" Jan 26 11:17:13 crc kubenswrapper[4619]: I0126 11:17:13.850839 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:13 crc kubenswrapper[4619]: I0126 11:17:13.936283 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:17:13 crc kubenswrapper[4619]: I0126 11:17:13.936813 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="dnsmasq-dns" containerID="cri-o://c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27" gracePeriod=10 Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.155477 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-fgwpp"] Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.157237 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.185472 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-fgwpp"] Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.210930 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-config\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.210969 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.211028 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxj9k\" (UniqueName: \"kubernetes.io/projected/1e69603e-4c04-4273-8c8c-b71255c1f370-kube-api-access-xxj9k\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.211060 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.211100 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.211125 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.211163 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.234536 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.234772 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322000 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-config\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322160 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322399 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxj9k\" (UniqueName: \"kubernetes.io/projected/1e69603e-4c04-4273-8c8c-b71255c1f370-kube-api-access-xxj9k\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322502 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322646 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322741 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.322873 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.323806 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.324393 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-config\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.325255 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.327304 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.328490 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.329040 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e69603e-4c04-4273-8c8c-b71255c1f370-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.388700 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxj9k\" (UniqueName: 
\"kubernetes.io/projected/1e69603e-4c04-4273-8c8c-b71255c1f370-kube-api-access-xxj9k\") pod \"dnsmasq-dns-6cd9bffc9-fgwpp\" (UID: \"1e69603e-4c04-4273-8c8c-b71255c1f370\") " pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.475648 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.619998 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.645509 4619 generic.go:334] "Generic (PLEG): container finished" podID="88977f8b-7824-4631-b531-45c5baf76787" containerID="c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27" exitCode=0 Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.645554 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" event={"ID":"88977f8b-7824-4631-b531-45c5baf76787","Type":"ContainerDied","Data":"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27"} Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.645581 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" event={"ID":"88977f8b-7824-4631-b531-45c5baf76787","Type":"ContainerDied","Data":"64d768a2fb8b2ed88af78819c4b0117d36a5c597588d4ff05ae1f4b77945ec8a"} Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.645597 4619 scope.go:117] "RemoveContainer" containerID="c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.645772 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rkvf5" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.713850 4619 scope.go:117] "RemoveContainer" containerID="6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.735494 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.735609 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncc5\" (UniqueName: \"kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.736950 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.736982 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.737021 4619 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.737087 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config\") pod \"88977f8b-7824-4631-b531-45c5baf76787\" (UID: \"88977f8b-7824-4631-b531-45c5baf76787\") " Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.741725 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5" (OuterVolumeSpecName: "kube-api-access-xncc5") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "kube-api-access-xncc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.744308 4619 scope.go:117] "RemoveContainer" containerID="c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27" Jan 26 11:17:14 crc kubenswrapper[4619]: E0126 11:17:14.744763 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27\": container with ID starting with c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27 not found: ID does not exist" containerID="c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.744816 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27"} err="failed to get container status \"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27\": rpc error: code = NotFound desc = could not find container \"c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27\": container with ID starting with c9fe315c56abd47ad944a21404f33af2e420dea871b8ddabb67f5fcf33467e27 not found: ID does not exist" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.744848 4619 scope.go:117] "RemoveContainer" containerID="6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d" Jan 26 11:17:14 crc kubenswrapper[4619]: E0126 11:17:14.745225 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d\": container with ID starting with 6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d not found: ID does not exist" containerID="6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.745253 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d"} err="failed to get container status \"6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d\": rpc error: code = NotFound desc = could not find container \"6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d\": container with ID starting with 6e74af164c9fcfa79054b6642723a37eed2507e1b75b7f332161d6bd2baf999d not found: ID does not exist" Jan 26 11:17:14 crc 
kubenswrapper[4619]: I0126 11:17:14.814655 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.819354 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.819671 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.837734 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config" (OuterVolumeSpecName: "config") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.839143 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncc5\" (UniqueName: \"kubernetes.io/projected/88977f8b-7824-4631-b531-45c5baf76787-kube-api-access-xncc5\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.839168 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.839177 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.839187 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.839196 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.849173 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88977f8b-7824-4631-b531-45c5baf76787" (UID: "88977f8b-7824-4631-b531-45c5baf76787"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.941252 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88977f8b-7824-4631-b531-45c5baf76787-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.981790 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:17:14 crc kubenswrapper[4619]: I0126 11:17:14.993000 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rkvf5"] Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.085948 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-fgwpp"] Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.275688 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88977f8b-7824-4631-b531-45c5baf76787" path="/var/lib/kubelet/pods/88977f8b-7824-4631-b531-45c5baf76787/volumes" Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.655305 4619 generic.go:334] "Generic (PLEG): container finished" podID="1e69603e-4c04-4273-8c8c-b71255c1f370" containerID="d94a4956b9bbe36660088abaaf1a4a0398d19d62580b21678863206f881f4551" exitCode=0 Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.655344 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" event={"ID":"1e69603e-4c04-4273-8c8c-b71255c1f370","Type":"ContainerDied","Data":"d94a4956b9bbe36660088abaaf1a4a0398d19d62580b21678863206f881f4551"} Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.655365 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" event={"ID":"1e69603e-4c04-4273-8c8c-b71255c1f370","Type":"ContainerStarted","Data":"60e8bbd856f37a3bbe681ac9714b4eb534aaa69367238aa3c8fa0eca36ffd7b9"} Jan 26 11:17:15 crc kubenswrapper[4619]: I0126 11:17:15.698874 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 11:17:16 crc kubenswrapper[4619]: I0126 11:17:16.669047 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" event={"ID":"1e69603e-4c04-4273-8c8c-b71255c1f370","Type":"ContainerStarted","Data":"36896dd672ea63d9e886b41e3d08cb88f4cbd0c358e3e2d287ec4558d99f8fc6"} Jan 26 11:17:16 crc kubenswrapper[4619]: I0126 11:17:16.669590 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:16 crc kubenswrapper[4619]: I0126 11:17:16.704827 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" podStartSLOduration=2.704807252 podStartE2EDuration="2.704807252s" podCreationTimestamp="2026-01-26 11:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:17:16.69122947 +0000 UTC m=+1335.725270186" watchObservedRunningTime="2026-01-26 11:17:16.704807252 +0000 UTC m=+1335.738847988" Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 11:17:24.477902 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-fgwpp" Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 11:17:24.547161 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 
11:17:24.547380 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="dnsmasq-dns" containerID="cri-o://3e7c73a0f6cb5ef1270c4b8b965a5ce299fd0556e22b4fb79e6813989c42ddb4" gracePeriod=10 Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 11:17:24.754406 4619 generic.go:334] "Generic (PLEG): container finished" podID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerID="3e7c73a0f6cb5ef1270c4b8b965a5ce299fd0556e22b4fb79e6813989c42ddb4" exitCode=0 Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 11:17:24.754438 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" event={"ID":"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e","Type":"ContainerDied","Data":"3e7c73a0f6cb5ef1270c4b8b965a5ce299fd0556e22b4fb79e6813989c42ddb4"} Jan 26 11:17:24 crc kubenswrapper[4619]: I0126 11:17:24.990099 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037239 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037279 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037353 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037374 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7prnd\" (UniqueName: \"kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037494 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037525 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.037602 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam\") pod \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\" (UID: \"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e\") " 
Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.051134 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd" (OuterVolumeSpecName: "kube-api-access-7prnd") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "kube-api-access-7prnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.100496 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.120885 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config" (OuterVolumeSpecName: "config") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.140733 4619 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.140755 4619 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.140763 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7prnd\" (UniqueName: \"kubernetes.io/projected/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-kube-api-access-7prnd\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.152788 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.175036 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.217004 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.224925 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" (UID: "29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.242434 4619 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.242732 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.242832 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.242901 4619 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.763676 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" event={"ID":"29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e","Type":"ContainerDied","Data":"d9146804c9090094ef065031976c5e04ef0798c67e64662cd96b842702a822ee"} Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.764719 4619 scope.go:117] "RemoveContainer" containerID="3e7c73a0f6cb5ef1270c4b8b965a5ce299fd0556e22b4fb79e6813989c42ddb4" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.764840 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mdmqs" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.790359 4619 scope.go:117] "RemoveContainer" containerID="c35eaf03c79609bc2c2a040041ac8aac6942eb47a284f73d80c8b580f8483d54" Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.790553 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:25 crc kubenswrapper[4619]: I0126 11:17:25.800564 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mdmqs"] Jan 26 11:17:27 crc kubenswrapper[4619]: I0126 11:17:27.288855 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" path="/var/lib/kubelet/pods/29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e/volumes" Jan 26 11:17:36 crc kubenswrapper[4619]: I0126 11:17:36.927362 4619 generic.go:334] "Generic (PLEG): container finished" podID="4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85" containerID="36850e49f874a2e12ff15b2c809f044e51b3157b6d735e4ee242b67f8986570b" exitCode=0 Jan 26 11:17:36 crc kubenswrapper[4619]: I0126 11:17:36.927757 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85","Type":"ContainerDied","Data":"36850e49f874a2e12ff15b2c809f044e51b3157b6d735e4ee242b67f8986570b"} Jan 26 11:17:37 crc kubenswrapper[4619]: I0126 11:17:37.948028 4619 generic.go:334] "Generic (PLEG): container finished" podID="f4190b4f-7c04-4c14-83b4-87e224fef035" containerID="be4c06f8e0e401f5c5ccd31eab18c573f7e0ac161f785b0a8b497ac89340a1be" exitCode=0 Jan 26 11:17:37 crc kubenswrapper[4619]: I0126 11:17:37.948118 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f4190b4f-7c04-4c14-83b4-87e224fef035","Type":"ContainerDied","Data":"be4c06f8e0e401f5c5ccd31eab18c573f7e0ac161f785b0a8b497ac89340a1be"} Jan 26 11:17:37 crc kubenswrapper[4619]: I0126 11:17:37.951478 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85","Type":"ContainerStarted","Data":"c3bc03bb0c8cb07d9e4fb3d7b84ad9711c36f63b0cc9a77392df2c518803630a"} Jan 26 11:17:37 crc kubenswrapper[4619]: I0126 11:17:37.952392 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 11:17:38 crc kubenswrapper[4619]: I0126 11:17:38.034492 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.034474675 podStartE2EDuration="37.034474675s" podCreationTimestamp="2026-01-26 11:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:17:38.025330271 +0000 UTC m=+1357.059370987" watchObservedRunningTime="2026-01-26 11:17:38.034474675 +0000 UTC m=+1357.068515391" Jan 26 11:17:38 crc kubenswrapper[4619]: I0126 11:17:38.960475 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f4190b4f-7c04-4c14-83b4-87e224fef035","Type":"ContainerStarted","Data":"2f4c8485f87be5b482ba627192f0dd7a1e8f7068abee029a1564dd81c3078cac"} Jan 26 11:17:38 crc kubenswrapper[4619]: I0126 11:17:38.961089 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:17:39 crc kubenswrapper[4619]: I0126 11:17:39.006589 4619 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.00657081 podStartE2EDuration="37.00657081s" podCreationTimestamp="2026-01-26 11:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:17:38.99648779 +0000 UTC m=+1358.030528506" watchObservedRunningTime="2026-01-26 11:17:39.00657081 +0000 UTC m=+1358.040611526" Jan 26 11:17:44 crc kubenswrapper[4619]: I0126 11:17:44.234326 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:17:44 crc kubenswrapper[4619]: I0126 11:17:44.234962 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:17:44 crc kubenswrapper[4619]: I0126 11:17:44.235015 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:17:44 crc kubenswrapper[4619]: I0126 11:17:44.235703 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:17:44 crc kubenswrapper[4619]: I0126 11:17:44.235762 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f" gracePeriod=600 Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.008009 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f" exitCode=0 Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.008816 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f"} Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.008902 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe"} Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.008971 4619 scope.go:117] "RemoveContainer" containerID="acb7965272930c0e5aeb32299fd66f4070cac2661e0eb68cc61aedd3e0ea08f9" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009239 4619 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr"] Jan 26 11:17:45 crc kubenswrapper[4619]: E0126 11:17:45.009642 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="init" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009658 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="init" Jan 26 11:17:45 crc kubenswrapper[4619]: E0126 11:17:45.009668 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="init" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009674 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="init" Jan 26 11:17:45 crc kubenswrapper[4619]: E0126 11:17:45.009705 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009712 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: E0126 11:17:45.009722 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009728 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009893 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="88977f8b-7824-4631-b531-45c5baf76787" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.009914 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="29902d6d-b3b9-4bb8-a47f-cd055fb5fc9e" containerName="dnsmasq-dns" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.010476 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.014027 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.014263 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.014367 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.014432 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.037745 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr"] Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.126176 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.126281 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.126305 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.126342 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2rn\" (UniqueName: \"kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.227809 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.227859 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.227901 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2rn\" (UniqueName: \"kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.227977 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.239406 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.239975 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.243741 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.244249 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2rn\" (UniqueName: \"kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:45 crc kubenswrapper[4619]: I0126 11:17:45.327333 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:17:46 crc kubenswrapper[4619]: I0126 11:17:46.033743 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr"] Jan 26 11:17:46 crc kubenswrapper[4619]: W0126 11:17:46.043294 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf65cd01_ac28_4699_ae97_2fd8546a9925.slice/crio-5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a WatchSource:0}: Error finding container 5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a: Status 404 returned error can't find the container with id 5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a Jan 26 11:17:47 crc kubenswrapper[4619]: I0126 11:17:47.029634 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" event={"ID":"af65cd01-ac28-4699-ae97-2fd8546a9925","Type":"ContainerStarted","Data":"5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a"} Jan 26 11:17:52 crc kubenswrapper[4619]: I0126 11:17:52.424796 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 11:17:53 crc kubenswrapper[4619]: I0126 11:17:53.220783 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:18:00 crc kubenswrapper[4619]: I0126 11:18:00.170932 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" event={"ID":"af65cd01-ac28-4699-ae97-2fd8546a9925","Type":"ContainerStarted","Data":"14fd6250e2034926ad1f3b33931a7337490a34981c372693b8efeb8bcf7090e0"} Jan 26 11:18:00 crc kubenswrapper[4619]: I0126 11:18:00.189780 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" podStartSLOduration=2.763818831 podStartE2EDuration="16.189763863s" podCreationTimestamp="2026-01-26 11:17:44 +0000 UTC" firstStartedPulling="2026-01-26 11:17:46.045935411 +0000 UTC m=+1365.079976127" lastFinishedPulling="2026-01-26 11:17:59.471880443 +0000 UTC m=+1378.505921159" observedRunningTime="2026-01-26 11:18:00.186187495 +0000 UTC m=+1379.220228211" watchObservedRunningTime="2026-01-26 11:18:00.189763863 +0000 UTC m=+1379.223804579" Jan 26 11:18:18 crc kubenswrapper[4619]: I0126 11:18:18.357392 4619 generic.go:334] "Generic (PLEG): container finished" podID="af65cd01-ac28-4699-ae97-2fd8546a9925" containerID="14fd6250e2034926ad1f3b33931a7337490a34981c372693b8efeb8bcf7090e0" exitCode=0 Jan 26 11:18:18 crc kubenswrapper[4619]: I0126 11:18:18.357473 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" event={"ID":"af65cd01-ac28-4699-ae97-2fd8546a9925","Type":"ContainerDied","Data":"14fd6250e2034926ad1f3b33931a7337490a34981c372693b8efeb8bcf7090e0"} Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.828375 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.914483 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory\") pod \"af65cd01-ac28-4699-ae97-2fd8546a9925\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.914634 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f2rn\" (UniqueName: \"kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn\") pod \"af65cd01-ac28-4699-ae97-2fd8546a9925\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.914677 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam\") pod \"af65cd01-ac28-4699-ae97-2fd8546a9925\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.914787 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle\") pod \"af65cd01-ac28-4699-ae97-2fd8546a9925\" (UID: \"af65cd01-ac28-4699-ae97-2fd8546a9925\") " Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.923294 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn" (OuterVolumeSpecName: "kube-api-access-2f2rn") pod "af65cd01-ac28-4699-ae97-2fd8546a9925" (UID: "af65cd01-ac28-4699-ae97-2fd8546a9925"). InnerVolumeSpecName "kube-api-access-2f2rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.940241 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "af65cd01-ac28-4699-ae97-2fd8546a9925" (UID: "af65cd01-ac28-4699-ae97-2fd8546a9925"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.957821 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory" (OuterVolumeSpecName: "inventory") pod "af65cd01-ac28-4699-ae97-2fd8546a9925" (UID: "af65cd01-ac28-4699-ae97-2fd8546a9925"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:18:19 crc kubenswrapper[4619]: I0126 11:18:19.959932 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af65cd01-ac28-4699-ae97-2fd8546a9925" (UID: "af65cd01-ac28-4699-ae97-2fd8546a9925"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.016996 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.017033 4619 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.017048 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af65cd01-ac28-4699-ae97-2fd8546a9925-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.017060 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f2rn\" (UniqueName: \"kubernetes.io/projected/af65cd01-ac28-4699-ae97-2fd8546a9925-kube-api-access-2f2rn\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.379555 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" event={"ID":"af65cd01-ac28-4699-ae97-2fd8546a9925","Type":"ContainerDied","Data":"5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a"} Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.379625 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.379612 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6c9db75c8c2dd5f66f9b17c686eb0993144b7e6613bf92d0f7c9b4cfe1504a" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.481498 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2"] Jan 26 11:18:20 crc kubenswrapper[4619]: E0126 11:18:20.481954 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af65cd01-ac28-4699-ae97-2fd8546a9925" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.481977 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="af65cd01-ac28-4699-ae97-2fd8546a9925" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.482254 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="af65cd01-ac28-4699-ae97-2fd8546a9925" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.482967 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.485697 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.485932 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.486117 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.489292 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.497423 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2"] Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.626730 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.626781 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.626843 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xprj\" (UniqueName: \"kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.729331 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.729862 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.730051 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xprj\" (UniqueName: \"kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.738209 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.738624 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.756262 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xprj\" (UniqueName: \"kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wjld2\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:20 crc kubenswrapper[4619]: I0126 11:18:20.830720 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:21 crc kubenswrapper[4619]: I0126 11:18:21.402666 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2"] Jan 26 11:18:22 crc kubenswrapper[4619]: I0126 11:18:22.410215 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" event={"ID":"99b4b151-e965-4c8b-9a4b-22b680ea1d69","Type":"ContainerStarted","Data":"d600cc7155c0f4f46d75d2a96ec56ad529600ef1e667ae879811f086e57bd3cf"} Jan 26 11:18:22 crc kubenswrapper[4619]: I0126 11:18:22.410586 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" event={"ID":"99b4b151-e965-4c8b-9a4b-22b680ea1d69","Type":"ContainerStarted","Data":"3fa2487c2cbb00239fb5bab97e6c30a0010c6d1a5f59658b3380bec7b856a9b4"} Jan 26 11:18:22 crc kubenswrapper[4619]: I0126 11:18:22.441974 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" podStartSLOduration=1.972357245 podStartE2EDuration="2.441950284s" podCreationTimestamp="2026-01-26 11:18:20 +0000 UTC" firstStartedPulling="2026-01-26 11:18:21.395175589 +0000 UTC m=+1400.429216305" lastFinishedPulling="2026-01-26 11:18:21.864768628 +0000 UTC m=+1400.898809344" observedRunningTime="2026-01-26 11:18:22.437502041 +0000 UTC m=+1401.471542767" watchObservedRunningTime="2026-01-26 11:18:22.441950284 +0000 UTC m=+1401.475991010" Jan 26 11:18:25 crc kubenswrapper[4619]: I0126 11:18:25.436849 4619 generic.go:334] "Generic (PLEG): container finished" podID="99b4b151-e965-4c8b-9a4b-22b680ea1d69" containerID="d600cc7155c0f4f46d75d2a96ec56ad529600ef1e667ae879811f086e57bd3cf" exitCode=0 Jan 26 11:18:25 crc kubenswrapper[4619]: I0126 11:18:25.437233 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" event={"ID":"99b4b151-e965-4c8b-9a4b-22b680ea1d69","Type":"ContainerDied","Data":"d600cc7155c0f4f46d75d2a96ec56ad529600ef1e667ae879811f086e57bd3cf"} Jan 26 11:18:26 crc kubenswrapper[4619]: I0126 11:18:26.877562 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.078202 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam\") pod \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.078351 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xprj\" (UniqueName: \"kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj\") pod \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.078510 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory\") pod \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\" (UID: \"99b4b151-e965-4c8b-9a4b-22b680ea1d69\") " Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.083321 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj" (OuterVolumeSpecName: "kube-api-access-8xprj") pod "99b4b151-e965-4c8b-9a4b-22b680ea1d69" (UID: "99b4b151-e965-4c8b-9a4b-22b680ea1d69"). InnerVolumeSpecName "kube-api-access-8xprj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.103059 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory" (OuterVolumeSpecName: "inventory") pod "99b4b151-e965-4c8b-9a4b-22b680ea1d69" (UID: "99b4b151-e965-4c8b-9a4b-22b680ea1d69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.104849 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99b4b151-e965-4c8b-9a4b-22b680ea1d69" (UID: "99b4b151-e965-4c8b-9a4b-22b680ea1d69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.181665 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xprj\" (UniqueName: \"kubernetes.io/projected/99b4b151-e965-4c8b-9a4b-22b680ea1d69-kube-api-access-8xprj\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.181703 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.181744 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99b4b151-e965-4c8b-9a4b-22b680ea1d69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.456521 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" event={"ID":"99b4b151-e965-4c8b-9a4b-22b680ea1d69","Type":"ContainerDied","Data":"3fa2487c2cbb00239fb5bab97e6c30a0010c6d1a5f59658b3380bec7b856a9b4"} Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.457072 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa2487c2cbb00239fb5bab97e6c30a0010c6d1a5f59658b3380bec7b856a9b4" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.456659 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wjld2" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.548771 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr"] Jan 26 11:18:27 crc kubenswrapper[4619]: E0126 11:18:27.549319 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99b4b151-e965-4c8b-9a4b-22b680ea1d69" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.549352 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="99b4b151-e965-4c8b-9a4b-22b680ea1d69" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.549730 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="99b4b151-e965-4c8b-9a4b-22b680ea1d69" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.550730 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.553283 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.553556 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.553877 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.554207 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.570312 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr"] Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.691893 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.692002 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.692235 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.692284 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gn5\" (UniqueName: \"kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.793673 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.793810 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.793838 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gn5\" (UniqueName: \"kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.793916 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.798530 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.798587 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.799696 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.831101 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gn5\" (UniqueName: \"kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:27 crc kubenswrapper[4619]: I0126 11:18:27.885300 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:18:28 crc kubenswrapper[4619]: I0126 11:18:28.433117 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr"] Jan 26 11:18:28 crc kubenswrapper[4619]: I0126 11:18:28.465231 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" event={"ID":"12059c45-fc17-45cc-a061-a1b5ea704285","Type":"ContainerStarted","Data":"5b1a14b65345729a324fab9532310edb14da3cf8326a5152b75662bc7038d878"} Jan 26 11:18:30 crc kubenswrapper[4619]: I0126 11:18:30.052251 4619 scope.go:117] "RemoveContainer" containerID="53e0439f074a8027768c23f947352e1e64902410b671bdc635853612ce33cead" Jan 26 11:18:30 crc kubenswrapper[4619]: I0126 11:18:30.078279 4619 scope.go:117] "RemoveContainer" containerID="e2746dc4ae64e053c885b747e74e417691d616f5361dbe4b45706f04c337acef" Jan 26 11:18:30 crc kubenswrapper[4619]: I0126 11:18:30.122912 4619 scope.go:117] "RemoveContainer" containerID="0c05180dee73982b1b82b9183be6a9cc7ba38482eb009fb0bc9d40d05ebcd000" Jan 26 11:18:30 crc kubenswrapper[4619]: I0126 11:18:30.482303 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" event={"ID":"12059c45-fc17-45cc-a061-a1b5ea704285","Type":"ContainerStarted","Data":"cd47a2091667bf93bbcb2f78afd46412992aae04bfd3b9569e9df8b7bb0bfa85"} Jan 26 11:18:30 crc kubenswrapper[4619]: I0126 11:18:30.500645 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" podStartSLOduration=2.596994102 podStartE2EDuration="3.500607136s" podCreationTimestamp="2026-01-26 11:18:27 +0000 UTC" firstStartedPulling="2026-01-26 11:18:28.435737973 +0000 UTC m=+1407.469778689" lastFinishedPulling="2026-01-26 11:18:29.339351017 +0000 UTC m=+1408.373391723" observedRunningTime="2026-01-26 11:18:30.494121939 +0000 UTC m=+1409.528162655" watchObservedRunningTime="2026-01-26 11:18:30.500607136 +0000 UTC m=+1409.534647842" Jan 26 11:19:30 crc kubenswrapper[4619]: I0126 11:19:30.229925 4619 scope.go:117] "RemoveContainer" containerID="66a6d798b2456c3065d4a03defe6cbd9ab5e5227621f280fde91db675e2f83aa" Jan 26 11:19:30 crc kubenswrapper[4619]: I0126 11:19:30.267052 4619 scope.go:117] "RemoveContainer" containerID="ada190b479ee52f8303b817e9c1c2701293e633d99dd5836167d714d09c747ba" Jan 26 11:19:44 crc kubenswrapper[4619]: I0126 11:19:44.234560 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:19:44 crc kubenswrapper[4619]: I0126 11:19:44.235281 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:20:14 crc kubenswrapper[4619]: I0126 11:20:14.234276 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:20:14 crc kubenswrapper[4619]: I0126 11:20:14.234847 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.050518 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.053195 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.098587 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.215520 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.216774 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmvn\" (UniqueName: \"kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.216901 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.318266 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.318336 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.318841 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfmvn\" (UniqueName: \"kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.319183 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.319828 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.339977 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfmvn\" (UniqueName: \"kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn\") pod \"redhat-operators-2gmrc\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.388718 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:15 crc kubenswrapper[4619]: I0126 11:20:15.838835 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:16 crc kubenswrapper[4619]: I0126 11:20:16.580498 4619 generic.go:334] "Generic (PLEG): container finished" podID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerID="545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5" exitCode=0 Jan 26 11:20:16 crc kubenswrapper[4619]: I0126 11:20:16.580652 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerDied","Data":"545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5"} Jan 26 11:20:16 crc kubenswrapper[4619]: I0126 11:20:16.580892 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerStarted","Data":"406baf68bda0ff2be1a70fc0b029855c34748f9588007fc51bfcef0fce5800c9"} Jan 26 11:20:18 crc kubenswrapper[4619]: I0126 11:20:18.601409 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerStarted","Data":"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406"} Jan 26 11:20:21 crc kubenswrapper[4619]: I0126 11:20:21.632354 4619 generic.go:334] "Generic (PLEG): container finished" podID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerID="6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406" exitCode=0 Jan 26 11:20:21 crc kubenswrapper[4619]: I0126 11:20:21.632453 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerDied","Data":"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406"} Jan 26 11:20:22 crc kubenswrapper[4619]: I0126 11:20:22.651148 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerStarted","Data":"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758"} Jan 26 11:20:22 crc kubenswrapper[4619]: I0126 11:20:22.677189 4619 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2gmrc" podStartSLOduration=1.8876918310000002 podStartE2EDuration="7.677169648s" podCreationTimestamp="2026-01-26 11:20:15 +0000 UTC" firstStartedPulling="2026-01-26 11:20:16.581967123 +0000 UTC m=+1515.616007839" lastFinishedPulling="2026-01-26 11:20:22.37144493 +0000 UTC m=+1521.405485656" observedRunningTime="2026-01-26 11:20:22.676041217 +0000 UTC m=+1521.710081943" watchObservedRunningTime="2026-01-26 11:20:22.677169648 +0000 UTC m=+1521.711210374" Jan 26 11:20:25 crc kubenswrapper[4619]: I0126 11:20:25.389523 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:25 crc kubenswrapper[4619]: I0126 11:20:25.389845 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:26 crc kubenswrapper[4619]: I0126 11:20:26.436398 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2gmrc" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="registry-server" probeResult="failure" output=< Jan 26 11:20:26 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:20:26 crc kubenswrapper[4619]: > Jan 26 11:20:30 crc kubenswrapper[4619]: I0126 11:20:30.335701 4619 scope.go:117] "RemoveContainer" containerID="b56fd8a7dbb2c8b1978a088101e7acddc67948bd399d538c2472832c6cffbd25" Jan 26 11:20:35 crc kubenswrapper[4619]: I0126 11:20:35.448173 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:35 crc kubenswrapper[4619]: I0126 11:20:35.513408 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:35 crc kubenswrapper[4619]: I0126 11:20:35.694448 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:36 crc kubenswrapper[4619]: I0126 11:20:36.764598 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2gmrc" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="registry-server" containerID="cri-o://6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758" gracePeriod=2 Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.206673 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.348083 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities\") pod \"6475e8bf-a24c-413e-88a8-a2cde22524b4\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.348378 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfmvn\" (UniqueName: \"kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn\") pod \"6475e8bf-a24c-413e-88a8-a2cde22524b4\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.348814 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content\") pod \"6475e8bf-a24c-413e-88a8-a2cde22524b4\" (UID: \"6475e8bf-a24c-413e-88a8-a2cde22524b4\") " Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.349095 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities" (OuterVolumeSpecName: "utilities") pod "6475e8bf-a24c-413e-88a8-a2cde22524b4" (UID: "6475e8bf-a24c-413e-88a8-a2cde22524b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.349467 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.354768 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn" (OuterVolumeSpecName: "kube-api-access-vfmvn") pod "6475e8bf-a24c-413e-88a8-a2cde22524b4" (UID: "6475e8bf-a24c-413e-88a8-a2cde22524b4"). InnerVolumeSpecName "kube-api-access-vfmvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.451233 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfmvn\" (UniqueName: \"kubernetes.io/projected/6475e8bf-a24c-413e-88a8-a2cde22524b4-kube-api-access-vfmvn\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.473183 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6475e8bf-a24c-413e-88a8-a2cde22524b4" (UID: "6475e8bf-a24c-413e-88a8-a2cde22524b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.553362 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6475e8bf-a24c-413e-88a8-a2cde22524b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.778252 4619 generic.go:334] "Generic (PLEG): container finished" podID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerID="6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758" exitCode=0 Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.778296 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gmrc" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.778303 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerDied","Data":"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758"} Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.778334 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gmrc" event={"ID":"6475e8bf-a24c-413e-88a8-a2cde22524b4","Type":"ContainerDied","Data":"406baf68bda0ff2be1a70fc0b029855c34748f9588007fc51bfcef0fce5800c9"} Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.778352 4619 scope.go:117] "RemoveContainer" containerID="6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.812838 4619 scope.go:117] "RemoveContainer" containerID="6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.821843 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.836320 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2gmrc"] Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.836470 4619 scope.go:117] "RemoveContainer" containerID="545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.893247 4619 scope.go:117] "RemoveContainer" containerID="6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758" Jan 26 11:20:37 crc kubenswrapper[4619]: E0126 11:20:37.897999 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758\": container with ID starting with 6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758 not found: ID does not exist" containerID="6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.898049 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758"} err="failed to get container status \"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758\": rpc error: code = NotFound desc = could not find container \"6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758\": container with ID starting with 6d2df14eb92742ba197260d13003bc2df8b5c14837f2b0909744941bace42758 not found: ID does not exist" Jan 26 11:20:37 crc 
kubenswrapper[4619]: I0126 11:20:37.898083 4619 scope.go:117] "RemoveContainer" containerID="6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406" Jan 26 11:20:37 crc kubenswrapper[4619]: E0126 11:20:37.899192 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406\": container with ID starting with 6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406 not found: ID does not exist" containerID="6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.899223 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406"} err="failed to get container status \"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406\": rpc error: code = NotFound desc = could not find container \"6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406\": container with ID starting with 6773e3916d74eea9a76ba71d7469e4d81ab0dea1de29d96110bc338e292b4406 not found: ID does not exist" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.899243 4619 scope.go:117] "RemoveContainer" containerID="545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5" Jan 26 11:20:37 crc kubenswrapper[4619]: E0126 11:20:37.899456 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5\": container with ID starting with 545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5 not found: ID does not exist" containerID="545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5" Jan 26 11:20:37 crc kubenswrapper[4619]: I0126 11:20:37.899490 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5"} err="failed to get container status \"545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5\": rpc error: code = NotFound desc = could not find container \"545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5\": container with ID starting with 545d9b9a21147a6b63608455cbbdc53b4be89a53270aa765916fe702db4360c5 not found: ID does not exist" Jan 26 11:20:39 crc kubenswrapper[4619]: I0126 11:20:39.276987 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" path="/var/lib/kubelet/pods/6475e8bf-a24c-413e-88a8-a2cde22524b4/volumes" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.106491 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:40 crc kubenswrapper[4619]: E0126 11:20:40.106985 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="registry-server" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.107001 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="registry-server" Jan 26 11:20:40 crc kubenswrapper[4619]: E0126 11:20:40.107062 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="extract-content" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.107069 4619 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="extract-content" Jan 26 11:20:40 crc kubenswrapper[4619]: E0126 11:20:40.107090 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="extract-utilities" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.107096 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="extract-utilities" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.107345 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6475e8bf-a24c-413e-88a8-a2cde22524b4" containerName="registry-server" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.109094 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.137084 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.214324 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.214376 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.214530 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prs6s\" (UniqueName: \"kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.316094 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prs6s\" (UniqueName: \"kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.316273 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.316296 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc 
kubenswrapper[4619]: I0126 11:20:40.316843 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.316887 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.343968 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prs6s\" (UniqueName: \"kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s\") pod \"community-operators-rkwj7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:40 crc kubenswrapper[4619]: I0126 11:20:40.437972 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:41 crc kubenswrapper[4619]: I0126 11:20:41.013581 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:41 crc kubenswrapper[4619]: I0126 11:20:41.813694 4619 generic.go:334] "Generic (PLEG): container finished" podID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerID="302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba" exitCode=0 Jan 26 11:20:41 crc kubenswrapper[4619]: I0126 11:20:41.813735 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerDied","Data":"302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba"} Jan 26 11:20:41 crc kubenswrapper[4619]: I0126 11:20:41.813761 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerStarted","Data":"4d950fcdb3fbd22e2da996b2d64a94351e102c0f3ea147d47da2bf42c0133042"} Jan 26 11:20:41 crc kubenswrapper[4619]: I0126 11:20:41.816777 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:20:42 crc kubenswrapper[4619]: I0126 11:20:42.822533 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerStarted","Data":"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2"} Jan 26 11:20:43 crc kubenswrapper[4619]: I0126 11:20:43.833824 4619 generic.go:334] "Generic (PLEG): container finished" podID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerID="74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2" exitCode=0 Jan 26 11:20:43 crc kubenswrapper[4619]: I0126 11:20:43.833857 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerDied","Data":"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2"} Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.234437 4619 
patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.234514 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.234695 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.235457 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.235523 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" gracePeriod=600 Jan 26 11:20:44 crc kubenswrapper[4619]: E0126 11:20:44.358562 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.843343 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" exitCode=0 Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.843408 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe"} Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.843485 4619 scope.go:117] "RemoveContainer" containerID="a97c96f04b0e6279913de2f2c440a695c44ec0545531f13754f0d90f3a1c8d9f" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.844205 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:20:44 crc kubenswrapper[4619]: E0126 11:20:44.844605 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.845457 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerStarted","Data":"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46"} Jan 26 11:20:44 crc kubenswrapper[4619]: I0126 11:20:44.942275 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkwj7" podStartSLOduration=2.537090134 podStartE2EDuration="4.941130913s" podCreationTimestamp="2026-01-26 11:20:40 +0000 UTC" firstStartedPulling="2026-01-26 11:20:41.816477348 +0000 UTC m=+1540.850518064" lastFinishedPulling="2026-01-26 11:20:44.220518127 +0000 UTC m=+1543.254558843" observedRunningTime="2026-01-26 11:20:44.894883992 +0000 UTC m=+1543.928924708" watchObservedRunningTime="2026-01-26 11:20:44.941130913 +0000 UTC m=+1543.975171629" Jan 26 11:20:45 crc kubenswrapper[4619]: I0126 11:20:45.908451 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:20:45 crc kubenswrapper[4619]: I0126 11:20:45.910441 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:45 crc kubenswrapper[4619]: I0126 11:20:45.936963 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.028116 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.028302 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flg5m\" (UniqueName: \"kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.028352 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.129962 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flg5m\" (UniqueName: \"kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.130042 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities\") pod \"redhat-marketplace-ds8mw\" (UID: 
\"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.130098 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.130654 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.130700 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.148869 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flg5m\" (UniqueName: \"kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m\") pod \"redhat-marketplace-ds8mw\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.239181 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.752645 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:20:46 crc kubenswrapper[4619]: I0126 11:20:46.865727 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerStarted","Data":"8f3585c12027142aafefa5e9d9fc45a4a03ec45b9d54034e92b74576b472c2c1"} Jan 26 11:20:47 crc kubenswrapper[4619]: I0126 11:20:47.875696 4619 generic.go:334] "Generic (PLEG): container finished" podID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerID="8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef" exitCode=0 Jan 26 11:20:47 crc kubenswrapper[4619]: I0126 11:20:47.875882 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerDied","Data":"8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef"} Jan 26 11:20:48 crc kubenswrapper[4619]: I0126 11:20:48.887250 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerStarted","Data":"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555"} Jan 26 11:20:49 crc kubenswrapper[4619]: I0126 11:20:49.908298 4619 generic.go:334] "Generic (PLEG): container finished" podID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerID="60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555" exitCode=0 Jan 26 11:20:49 crc kubenswrapper[4619]: I0126 
11:20:49.908485 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerDied","Data":"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555"} Jan 26 11:20:50 crc kubenswrapper[4619]: I0126 11:20:50.438232 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:50 crc kubenswrapper[4619]: I0126 11:20:50.438390 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:50 crc kubenswrapper[4619]: I0126 11:20:50.491859 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:50 crc kubenswrapper[4619]: I0126 11:20:50.964291 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:52 crc kubenswrapper[4619]: I0126 11:20:52.495068 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:52 crc kubenswrapper[4619]: I0126 11:20:52.938377 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkwj7" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="registry-server" containerID="cri-o://26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46" gracePeriod=2 Jan 26 11:20:52 crc kubenswrapper[4619]: I0126 11:20:52.938722 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerStarted","Data":"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce"} Jan 26 11:20:52 crc kubenswrapper[4619]: I0126 11:20:52.968072 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ds8mw" podStartSLOduration=4.027480455 podStartE2EDuration="7.968052673s" podCreationTimestamp="2026-01-26 11:20:45 +0000 UTC" firstStartedPulling="2026-01-26 11:20:47.87877464 +0000 UTC m=+1546.912815356" lastFinishedPulling="2026-01-26 11:20:51.819346858 +0000 UTC m=+1550.853387574" observedRunningTime="2026-01-26 11:20:52.958510791 +0000 UTC m=+1551.992551517" watchObservedRunningTime="2026-01-26 11:20:52.968052673 +0000 UTC m=+1552.002093389" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.424666 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.478717 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities\") pod \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.478834 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content\") pod \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.479027 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prs6s\" (UniqueName: \"kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s\") pod \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\" (UID: \"ed2195a2-ff3d-4f89-99fc-953f66ab06c7\") " Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.479587 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities" (OuterVolumeSpecName: "utilities") pod "ed2195a2-ff3d-4f89-99fc-953f66ab06c7" (UID: "ed2195a2-ff3d-4f89-99fc-953f66ab06c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.485011 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s" (OuterVolumeSpecName: "kube-api-access-prs6s") pod "ed2195a2-ff3d-4f89-99fc-953f66ab06c7" (UID: "ed2195a2-ff3d-4f89-99fc-953f66ab06c7"). InnerVolumeSpecName "kube-api-access-prs6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.537469 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed2195a2-ff3d-4f89-99fc-953f66ab06c7" (UID: "ed2195a2-ff3d-4f89-99fc-953f66ab06c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.580669 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prs6s\" (UniqueName: \"kubernetes.io/projected/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-kube-api-access-prs6s\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.580701 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.580713 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed2195a2-ff3d-4f89-99fc-953f66ab06c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.953256 4619 generic.go:334] "Generic (PLEG): container finished" podID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerID="26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46" exitCode=0 Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.953907 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkwj7" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.955243 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerDied","Data":"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46"} Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.955321 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkwj7" event={"ID":"ed2195a2-ff3d-4f89-99fc-953f66ab06c7","Type":"ContainerDied","Data":"4d950fcdb3fbd22e2da996b2d64a94351e102c0f3ea147d47da2bf42c0133042"} Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.955365 4619 scope.go:117] "RemoveContainer" containerID="26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.986539 4619 scope.go:117] "RemoveContainer" containerID="74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2" Jan 26 11:20:53 crc kubenswrapper[4619]: I0126 11:20:53.992061 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.001875 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkwj7"] Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.008524 4619 scope.go:117] "RemoveContainer" containerID="302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.060032 4619 scope.go:117] "RemoveContainer" containerID="26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46" Jan 26 11:20:54 crc kubenswrapper[4619]: E0126 11:20:54.060469 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46\": container with ID starting with 26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46 not found: ID does not exist" containerID="26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.060518 
4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46"} err="failed to get container status \"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46\": rpc error: code = NotFound desc = could not find container \"26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46\": container with ID starting with 26ef99dfaa04714c7b627266a6ad52abe74db2024d9c610a2dd28dfa0f63be46 not found: ID does not exist" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.060552 4619 scope.go:117] "RemoveContainer" containerID="74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2" Jan 26 11:20:54 crc kubenswrapper[4619]: E0126 11:20:54.060974 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2\": container with ID starting with 74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2 not found: ID does not exist" containerID="74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.061009 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2"} err="failed to get container status \"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2\": rpc error: code = NotFound desc = could not find container \"74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2\": container with ID starting with 74fb8f04e07bc7c2069c6524ef95a62f981fce3d7a15e9c54893e5ec5c1d05f2 not found: ID does not exist" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.061030 4619 scope.go:117] "RemoveContainer" containerID="302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba" Jan 26 11:20:54 crc kubenswrapper[4619]: E0126 11:20:54.061323 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba\": container with ID starting with 302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba not found: ID does not exist" containerID="302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba" Jan 26 11:20:54 crc kubenswrapper[4619]: I0126 11:20:54.061350 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba"} err="failed to get container status \"302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba\": rpc error: code = NotFound desc = could not find container \"302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba\": container with ID starting with 302d0ced79832059dbdc3d96cfb2109aa929d28a333294a7630313c7385d08ba not found: ID does not exist" Jan 26 11:20:55 crc kubenswrapper[4619]: I0126 11:20:55.273592 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" path="/var/lib/kubelet/pods/ed2195a2-ff3d-4f89-99fc-953f66ab06c7/volumes" Jan 26 11:20:56 crc kubenswrapper[4619]: I0126 11:20:56.240024 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:56 crc kubenswrapper[4619]: I0126 11:20:56.240416 4619 kubelet.go:2542] "SyncLoop (probe)" 
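The "DeleteContainer returned error ... NotFound" triplets above are a benign race as these records read: the pod's containers were already removed along with the pod, so when RemoveContainer retries, the runtime answers NotFound. The gRPC status/codes packages below are real; the idea of treating NotFound as "already removed" is a reading of these records, not kubelet code:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        // Same shape as the errors above: rpc error: code = NotFound desc = ...
        err := status.Error(codes.NotFound, "could not find container")

        // A deletion retry can treat NotFound as success (nothing left to
        // delete), which is why these records are noise rather than failures.
        fmt.Println(status.Code(err) == codes.NotFound) // true
    }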
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:56 crc kubenswrapper[4619]: I0126 11:20:56.303713 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:57 crc kubenswrapper[4619]: I0126 11:20:57.060459 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:57 crc kubenswrapper[4619]: I0126 11:20:57.697005 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.010699 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ds8mw" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="registry-server" containerID="cri-o://b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce" gracePeriod=2 Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.261556 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:20:59 crc kubenswrapper[4619]: E0126 11:20:59.261991 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.497061 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.592003 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content\") pod \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.592108 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flg5m\" (UniqueName: \"kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m\") pod \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.592173 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities\") pod \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\" (UID: \"d9f4cc01-5a52-4246-8f8c-1593ba05a26a\") " Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.593404 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities" (OuterVolumeSpecName: "utilities") pod "d9f4cc01-5a52-4246-8f8c-1593ba05a26a" (UID: "d9f4cc01-5a52-4246-8f8c-1593ba05a26a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.594031 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.613852 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m" (OuterVolumeSpecName: "kube-api-access-flg5m") pod "d9f4cc01-5a52-4246-8f8c-1593ba05a26a" (UID: "d9f4cc01-5a52-4246-8f8c-1593ba05a26a"). InnerVolumeSpecName "kube-api-access-flg5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.618564 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9f4cc01-5a52-4246-8f8c-1593ba05a26a" (UID: "d9f4cc01-5a52-4246-8f8c-1593ba05a26a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.695944 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:59 crc kubenswrapper[4619]: I0126 11:20:59.696001 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flg5m\" (UniqueName: \"kubernetes.io/projected/d9f4cc01-5a52-4246-8f8c-1593ba05a26a-kube-api-access-flg5m\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.021542 4619 generic.go:334] "Generic (PLEG): container finished" podID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerID="b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce" exitCode=0 Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.021642 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds8mw" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.021660 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerDied","Data":"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce"} Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.022717 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds8mw" event={"ID":"d9f4cc01-5a52-4246-8f8c-1593ba05a26a","Type":"ContainerDied","Data":"8f3585c12027142aafefa5e9d9fc45a4a03ec45b9d54034e92b74576b472c2c1"} Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.022802 4619 scope.go:117] "RemoveContainer" containerID="b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.045672 4619 scope.go:117] "RemoveContainer" containerID="60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.059842 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.079179 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds8mw"] Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.083969 4619 scope.go:117] "RemoveContainer" containerID="8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.126498 4619 scope.go:117] "RemoveContainer" containerID="b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce" Jan 26 11:21:00 crc kubenswrapper[4619]: E0126 11:21:00.131351 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce\": container with ID starting with b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce not found: ID does not exist" containerID="b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.131388 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce"} err="failed to get container status \"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce\": rpc error: code = NotFound desc = could not find container \"b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce\": container with ID starting with b0914e803b875ea05b6b2dad4d44568bf4e9dc6f3baab639b117503c60d426ce not found: ID does not exist" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.131416 4619 scope.go:117] "RemoveContainer" containerID="60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555" Jan 26 11:21:00 crc kubenswrapper[4619]: E0126 11:21:00.132352 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555\": container with ID starting with 60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555 not found: ID does not exist" containerID="60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.132371 4619 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555"} err="failed to get container status \"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555\": rpc error: code = NotFound desc = could not find container \"60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555\": container with ID starting with 60a0126feec11de54d30122e6f98703263d5ffe5bb3d49dfdd9ef072366d2555 not found: ID does not exist" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.132383 4619 scope.go:117] "RemoveContainer" containerID="8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef" Jan 26 11:21:00 crc kubenswrapper[4619]: E0126 11:21:00.132661 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef\": container with ID starting with 8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef not found: ID does not exist" containerID="8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef" Jan 26 11:21:00 crc kubenswrapper[4619]: I0126 11:21:00.132676 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef"} err="failed to get container status \"8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef\": rpc error: code = NotFound desc = could not find container \"8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef\": container with ID starting with 8d75582b4a52c4ecb9da5fdc53037dbaaaafd8fec1d1c4c94451335e6bf183ef not found: ID does not exist" Jan 26 11:21:01 crc kubenswrapper[4619]: I0126 11:21:01.281574 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" path="/var/lib/kubelet/pods/d9f4cc01-5a52-4246-8f8c-1593ba05a26a/volumes" Jan 26 11:21:12 crc kubenswrapper[4619]: I0126 11:21:12.264104 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:21:12 crc kubenswrapper[4619]: E0126 11:21:12.265158 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:21:24 crc kubenswrapper[4619]: I0126 11:21:24.261255 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:21:24 crc kubenswrapper[4619]: E0126 11:21:24.262002 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.704954 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xz85"] Jan 26 11:21:29 crc 
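The machine-config-daemon records that keep recurring above ("back-off 5m0s restarting failed container=...") are the kubelet's restart backoff at its ceiling: each failed restart doubles the wait until it hits the cap, which is why the same error reappears every ten to thirty seconds without the container actually starting. A small illustration of capped doubling; the 10s base and 5m cap are the commonly documented kubelet defaults, assumed here rather than taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff, limit := 10*time.Second, 5*time.Minute
        for i := 0; i < 7; i++ {
            fmt.Println(backoff) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
            backoff *= 2
            if backoff > limit {
                backoff = limit
            }
        }
    }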
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.705932 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="registry-server"
Jan 26 11:21:29 crc kubenswrapper[4619]: E0126 11:21:29.705952 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="extract-content"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.705960 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="extract-content"
Jan 26 11:21:29 crc kubenswrapper[4619]: E0126 11:21:29.705972 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="extract-utilities"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.705979 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="extract-utilities"
Jan 26 11:21:29 crc kubenswrapper[4619]: E0126 11:21:29.705997 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="extract-utilities"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.706022 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="extract-utilities"
Jan 26 11:21:29 crc kubenswrapper[4619]: E0126 11:21:29.706036 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="registry-server"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.706044 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="registry-server"
Jan 26 11:21:29 crc kubenswrapper[4619]: E0126 11:21:29.706062 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="extract-content"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.706069 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="extract-content"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.706304 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f4cc01-5a52-4246-8f8c-1593ba05a26a" containerName="registry-server"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.706327 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2195a2-ff3d-4f89-99fc-953f66ab06c7" containerName="registry-server"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.708027 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xz85"
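This RemoveStaleState burst fires when the new pod is admitted: the CPU and memory managers drop per-container resource assignments left behind by the two catalog pods that were just removed (hence one cpu_manager/state_mem pair per init and main container of each). Illustrative only; the key type and map below are stand-ins for the managers' bookkeeping, not kubelet types:

    package main

    import "fmt"

    type key struct {
        podUID    string
        container string
    }

    func main() {
        assignments := map[key]string{
            {"ed2195a2-ff3d-4f89-99fc-953f66ab06c7", "registry-server"}: "cpuset: 0-3",
            {"d9f4cc01-5a52-4246-8f8c-1593ba05a26a", "extract-content"}: "cpuset: 0-3",
        }
        gone := "ed2195a2-ff3d-4f89-99fc-953f66ab06c7" // pod was REMOVEd earlier
        for k := range assignments {
            if k.podUID == gone {
                delete(assignments, k) // deleting during range is safe in Go
                fmt.Println("Deleted CPUSet assignment", k.podUID, k.container)
            }
        }
    }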
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.717353 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xz85"]
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.740169 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqf7n\" (UniqueName: \"kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.740515 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.740754 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.842385 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.842521 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqf7n\" (UniqueName: \"kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.842947 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.842945 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.843323 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:29 crc kubenswrapper[4619]: I0126 11:21:29.873558 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqf7n\" (UniqueName: \"kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n\") pod \"certified-operators-6xz85\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") " pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:30 crc kubenswrapper[4619]: I0126 11:21:30.032078 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:30 crc kubenswrapper[4619]: I0126 11:21:30.542591 4619 scope.go:117] "RemoveContainer" containerID="d7c41d08f0808fe850195c144ddbd6902c46168c9cb71400eee4c7b842c77240"
Jan 26 11:21:30 crc kubenswrapper[4619]: I0126 11:21:30.549595 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xz85"]
Jan 26 11:21:30 crc kubenswrapper[4619]: I0126 11:21:30.594931 4619 scope.go:117] "RemoveContainer" containerID="4e92dea6eb8b6739e7022ed5647ad9870f06a3ebe4b3280e2d612b1d759ba939"
Jan 26 11:21:31 crc kubenswrapper[4619]: I0126 11:21:31.380349 4619 generic.go:334] "Generic (PLEG): container finished" podID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerID="315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9" exitCode=0
Jan 26 11:21:31 crc kubenswrapper[4619]: I0126 11:21:31.380731 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerDied","Data":"315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9"}
Jan 26 11:21:31 crc kubenswrapper[4619]: I0126 11:21:31.380760 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerStarted","Data":"6bcd082bd8a2bbbfef527c3981ac3d281d823cf354daaa4375206a17c0178df3"}
Jan 26 11:21:32 crc kubenswrapper[4619]: I0126 11:21:32.397662 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerStarted","Data":"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414"}
Jan 26 11:21:34 crc kubenswrapper[4619]: I0126 11:21:34.426120 4619 generic.go:334] "Generic (PLEG): container finished" podID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerID="debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414" exitCode=0
Jan 26 11:21:34 crc kubenswrapper[4619]: I0126 11:21:34.426399 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerDied","Data":"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414"}
Jan 26 11:21:35 crc kubenswrapper[4619]: I0126 11:21:35.436985 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerStarted","Data":"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da"}
Jan 26 11:21:37 crc kubenswrapper[4619]: I0126 11:21:37.261356 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe"
Jan 26 11:21:37 crc kubenswrapper[4619]: E0126 11:21:37.261967 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.033009 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.033559 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.121549 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.149543 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xz85" podStartSLOduration=7.724190157 podStartE2EDuration="11.149522711s" podCreationTimestamp="2026-01-26 11:21:29 +0000 UTC" firstStartedPulling="2026-01-26 11:21:31.382399507 +0000 UTC m=+1590.416440223" lastFinishedPulling="2026-01-26 11:21:34.807732051 +0000 UTC m=+1593.841772777" observedRunningTime="2026-01-26 11:21:35.467008192 +0000 UTC m=+1594.501048908" watchObservedRunningTime="2026-01-26 11:21:40.149522711 +0000 UTC m=+1599.183563427"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.555879 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:40 crc kubenswrapper[4619]: I0126 11:21:40.621363 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xz85"]
Jan 26 11:21:42 crc kubenswrapper[4619]: I0126 11:21:42.495845 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xz85" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="registry-server" containerID="cri-o://533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da" gracePeriod=2
Jan 26 11:21:42 crc kubenswrapper[4619]: I0126 11:21:42.981507 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xz85"
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.120369 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqf7n\" (UniqueName: \"kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n\") pod \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") "
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.120779 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content\") pod \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") "
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.120947 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities\") pod \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\" (UID: \"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74\") "
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.121705 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities" (OuterVolumeSpecName: "utilities") pod "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" (UID: "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.134850 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n" (OuterVolumeSpecName: "kube-api-access-fqf7n") pod "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" (UID: "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74"). InnerVolumeSpecName "kube-api-access-fqf7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.173039 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" (UID: "f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
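The "Killing container with a grace period ... gracePeriod=2" record above corresponds, at the CRI boundary, to a StopContainer call whose timeout is the grace period; only after it elapses does the runtime force-kill. A hedged sketch: the request type is from the real k8s.io/cri-api module, but stopWithGrace is an illustrative helper and obtaining rt (a gRPC client on the CRI socket) is left out:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: id, // e.g. the cri-o://533b07c... ID above, minus the scheme
            Timeout:     2,  // seconds, matching gracePeriod=2
        })
        return err
    }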
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.223894 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.224116 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqf7n\" (UniqueName: \"kubernetes.io/projected/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-kube-api-access-fqf7n\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.224249 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.508736 4619 generic.go:334] "Generic (PLEG): container finished" podID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerID="533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da" exitCode=0 Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.508793 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerDied","Data":"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da"} Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.508802 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xz85" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.508828 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xz85" event={"ID":"f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74","Type":"ContainerDied","Data":"6bcd082bd8a2bbbfef527c3981ac3d281d823cf354daaa4375206a17c0178df3"} Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.508852 4619 scope.go:117] "RemoveContainer" containerID="533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.536517 4619 scope.go:117] "RemoveContainer" containerID="debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.560559 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xz85"] Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.560705 4619 scope.go:117] "RemoveContainer" containerID="315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.570864 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xz85"] Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.628731 4619 scope.go:117] "RemoveContainer" containerID="533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da" Jan 26 11:21:43 crc kubenswrapper[4619]: E0126 11:21:43.630509 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da\": container with ID starting with 533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da not found: ID does not exist" containerID="533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.630577 
4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da"} err="failed to get container status \"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da\": rpc error: code = NotFound desc = could not find container \"533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da\": container with ID starting with 533b07c169813cd7db91b0b1a742c7b903ca09369a4847f9db9323911871f1da not found: ID does not exist" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.630643 4619 scope.go:117] "RemoveContainer" containerID="debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414" Jan 26 11:21:43 crc kubenswrapper[4619]: E0126 11:21:43.631379 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414\": container with ID starting with debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414 not found: ID does not exist" containerID="debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.631473 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414"} err="failed to get container status \"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414\": rpc error: code = NotFound desc = could not find container \"debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414\": container with ID starting with debf64315985e7a20fb2c72f0ee118a86b28b0390f0f864d5c4a8d8f5a61c414 not found: ID does not exist" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.631540 4619 scope.go:117] "RemoveContainer" containerID="315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9" Jan 26 11:21:43 crc kubenswrapper[4619]: E0126 11:21:43.632052 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9\": container with ID starting with 315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9 not found: ID does not exist" containerID="315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9" Jan 26 11:21:43 crc kubenswrapper[4619]: I0126 11:21:43.632095 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9"} err="failed to get container status \"315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9\": rpc error: code = NotFound desc = could not find container \"315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9\": container with ID starting with 315ec983da6e1ef23643368a01e173a265398151e72726cd1aa71790722ff7e9 not found: ID does not exist" Jan 26 11:21:45 crc kubenswrapper[4619]: I0126 11:21:45.280826 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" path="/var/lib/kubelet/pods/f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74/volumes" Jan 26 11:21:52 crc kubenswrapper[4619]: I0126 11:21:52.261685 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:21:52 crc kubenswrapper[4619]: E0126 11:21:52.262541 4619 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.057119 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fwhc7"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.066731 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-175c-account-create-update-76xvz"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.076022 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lctj2"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.084167 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g9858"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.093217 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c5a-account-create-update-l6h4b"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.101001 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3208-account-create-update-cvfx5"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.108174 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fwhc7"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.136457 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g9858"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.146077 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-175c-account-create-update-76xvz"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.155650 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lctj2"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.165313 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3208-account-create-update-cvfx5"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.174184 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5c5a-account-create-update-l6h4b"] Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.271439 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2937b147-979a-43bf-8eab-467296040a2e" path="/var/lib/kubelet/pods/2937b147-979a-43bf-8eab-467296040a2e/volumes" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.273213 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe9f8cc-1a25-4194-ab2b-4693561283e1" path="/var/lib/kubelet/pods/5fe9f8cc-1a25-4194-ab2b-4693561283e1/volumes" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.274561 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ff8170-a280-4bb7-88dd-21f76bb168f3" path="/var/lib/kubelet/pods/97ff8170-a280-4bb7-88dd-21f76bb168f3/volumes" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.275345 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c70981-4cae-4170-a1ba-c7887aa5da2d" path="/var/lib/kubelet/pods/b5c70981-4cae-4170-a1ba-c7887aa5da2d/volumes" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.276650 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2" path="/var/lib/kubelet/pods/ccd507c4-fb34-4cbe-bfcf-cefc89f7bae2/volumes" Jan 26 11:22:03 crc kubenswrapper[4619]: I0126 11:22:03.277352 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e875d8ac-1f96-4846-8f29-cd00cf7d86fe" path="/var/lib/kubelet/pods/e875d8ac-1f96-4846-8f29-cd00cf7d86fe/volumes" Jan 26 11:22:04 crc kubenswrapper[4619]: I0126 11:22:04.260703 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:22:04 crc kubenswrapper[4619]: E0126 11:22:04.261028 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:22:11 crc kubenswrapper[4619]: I0126 11:22:11.041554 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-swzd7"] Jan 26 11:22:11 crc kubenswrapper[4619]: I0126 11:22:11.050936 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-swzd7"] Jan 26 11:22:11 crc kubenswrapper[4619]: I0126 11:22:11.271037 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee547519-04e8-4ff3-be01-6d7783e3a4d3" path="/var/lib/kubelet/pods/ee547519-04e8-4ff3-be01-6d7783e3a4d3/volumes" Jan 26 11:22:19 crc kubenswrapper[4619]: I0126 11:22:19.261427 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:22:19 crc kubenswrapper[4619]: E0126 11:22:19.262265 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.262436 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:22:30 crc kubenswrapper[4619]: E0126 11:22:30.263574 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.728322 4619 scope.go:117] "RemoveContainer" containerID="e472c9ecd22b6f89b90410de3ac72a312dff62c05bd744b1790f4c74c60332b3" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.755445 4619 scope.go:117] "RemoveContainer" containerID="673fe103122cf652a0f5b926a3f7f696fcf3bdcf8dbb86d9b3505a7ad68c0492" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.793458 4619 scope.go:117] "RemoveContainer" containerID="5d22270acbca4e866d36321e1e44344aad360c39cb33700557dd0f735e485b16" Jan 26 11:22:30 crc kubenswrapper[4619]: 
I0126 11:22:30.857763 4619 scope.go:117] "RemoveContainer" containerID="240c0a32e20dc33444985d528805d5c6ff200e99fb9f87a1becaa491a9eeb853" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.898662 4619 scope.go:117] "RemoveContainer" containerID="93d3c48fb3d786f7ec6053258f0469cf2c48ebf19628ab140ea918ec05229ca5" Jan 26 11:22:30 crc kubenswrapper[4619]: I0126 11:22:30.971317 4619 scope.go:117] "RemoveContainer" containerID="d3a53c80677671d1350c62ef8936a1bfa5a9e5620a002d16bf33ea60a8f75e8a" Jan 26 11:22:31 crc kubenswrapper[4619]: I0126 11:22:31.004646 4619 scope.go:117] "RemoveContainer" containerID="feb5275205b3ec51260062f4b01098d1e1579f9c81cc200e97c8eea2bad8f970" Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.046012 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2n6m9"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.065994 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-bf9c-account-create-update-r7s76"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.075609 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0751-account-create-update-6ddwf"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.084316 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d078-account-create-update-vhccn"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.093417 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-l66xq"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.106490 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0751-account-create-update-6ddwf"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.114405 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d078-account-create-update-vhccn"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.121671 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-bf9c-account-create-update-r7s76"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.129691 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-l66xq"] Jan 26 11:22:32 crc kubenswrapper[4619]: I0126 11:22:32.137368 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2n6m9"] Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.025510 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9qh4x"] Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.036571 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9qh4x"] Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.271004 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7a9429-31bf-450f-8bbc-5732f2073487" path="/var/lib/kubelet/pods/0a7a9429-31bf-450f-8bbc-5732f2073487/volumes" Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.272218 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efc4fce-ab7b-4156-aabe-47611db76dc4" path="/var/lib/kubelet/pods/1efc4fce-ab7b-4156-aabe-47611db76dc4/volumes" Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.273364 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99" path="/var/lib/kubelet/pods/74f257e3-b4bd-4cb5-b9b0-a0179cbd9f99/volumes" Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.274607 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="911c4b43-4424-4fcf-91ca-d1c4263dd12d" path="/var/lib/kubelet/pods/911c4b43-4424-4fcf-91ca-d1c4263dd12d/volumes" Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.275920 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a025664f-5072-4d63-af0f-d1d92e019292" path="/var/lib/kubelet/pods/a025664f-5072-4d63-af0f-d1d92e019292/volumes" Jan 26 11:22:33 crc kubenswrapper[4619]: I0126 11:22:33.276691 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96e2aac-889c-48f2-834b-2795a77c10d2" path="/var/lib/kubelet/pods/d96e2aac-889c-48f2-834b-2795a77c10d2/volumes" Jan 26 11:22:36 crc kubenswrapper[4619]: I0126 11:22:36.013451 4619 generic.go:334] "Generic (PLEG): container finished" podID="12059c45-fc17-45cc-a061-a1b5ea704285" containerID="cd47a2091667bf93bbcb2f78afd46412992aae04bfd3b9569e9df8b7bb0bfa85" exitCode=0 Jan 26 11:22:36 crc kubenswrapper[4619]: I0126 11:22:36.013551 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" event={"ID":"12059c45-fc17-45cc-a061-a1b5ea704285","Type":"ContainerDied","Data":"cd47a2091667bf93bbcb2f78afd46412992aae04bfd3b9569e9df8b7bb0bfa85"} Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.396902 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.500478 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle\") pod \"12059c45-fc17-45cc-a061-a1b5ea704285\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.500584 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam\") pod \"12059c45-fc17-45cc-a061-a1b5ea704285\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.500765 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gn5\" (UniqueName: \"kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5\") pod \"12059c45-fc17-45cc-a061-a1b5ea704285\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.500800 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory\") pod \"12059c45-fc17-45cc-a061-a1b5ea704285\" (UID: \"12059c45-fc17-45cc-a061-a1b5ea704285\") " Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.529702 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "12059c45-fc17-45cc-a061-a1b5ea704285" (UID: "12059c45-fc17-45cc-a061-a1b5ea704285"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.533753 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5" (OuterVolumeSpecName: "kube-api-access-j6gn5") pod "12059c45-fc17-45cc-a061-a1b5ea704285" (UID: "12059c45-fc17-45cc-a061-a1b5ea704285"). InnerVolumeSpecName "kube-api-access-j6gn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.614336 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gn5\" (UniqueName: \"kubernetes.io/projected/12059c45-fc17-45cc-a061-a1b5ea704285-kube-api-access-j6gn5\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.614388 4619 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.677033 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12059c45-fc17-45cc-a061-a1b5ea704285" (UID: "12059c45-fc17-45cc-a061-a1b5ea704285"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.704793 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory" (OuterVolumeSpecName: "inventory") pod "12059c45-fc17-45cc-a061-a1b5ea704285" (UID: "12059c45-fc17-45cc-a061-a1b5ea704285"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.715918 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:37 crc kubenswrapper[4619]: I0126 11:22:37.715949 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12059c45-fc17-45cc-a061-a1b5ea704285-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.047917 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8bqkm"] Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.059638 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8bqkm"] Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.134514 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l"] Jan 26 11:22:38 crc kubenswrapper[4619]: E0126 11:22:38.134902 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="registry-server" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.134918 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="registry-server" Jan 26 11:22:38 crc kubenswrapper[4619]: E0126 11:22:38.134928 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="extract-utilities" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.134934 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="extract-utilities" Jan 26 11:22:38 crc kubenswrapper[4619]: E0126 11:22:38.134952 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="extract-content" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.134958 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="extract-content" Jan 26 11:22:38 crc kubenswrapper[4619]: E0126 11:22:38.134979 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12059c45-fc17-45cc-a061-a1b5ea704285" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.134985 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="12059c45-fc17-45cc-a061-a1b5ea704285" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.135155 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17d3c5e-1a67-45a3-8f9e-5b0d9da29a74" containerName="registry-server" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.135173 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="12059c45-fc17-45cc-a061-a1b5ea704285" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.135739 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.149195 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l"] Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.223827 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94lq\" (UniqueName: \"kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.223881 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.224077 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.237209 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" event={"ID":"12059c45-fc17-45cc-a061-a1b5ea704285","Type":"ContainerDied","Data":"5b1a14b65345729a324fab9532310edb14da3cf8326a5152b75662bc7038d878"} Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.237251 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b1a14b65345729a324fab9532310edb14da3cf8326a5152b75662bc7038d878" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.238329 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.325897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94lq\" (UniqueName: \"kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.325996 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.326105 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.330741 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.332208 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.344749 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94lq\" (UniqueName: \"kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.452733 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:22:38 crc kubenswrapper[4619]: I0126 11:22:38.844190 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l"] Jan 26 11:22:39 crc kubenswrapper[4619]: I0126 11:22:39.248699 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" event={"ID":"ac31ccc2-07ae-4326-80dd-12b4e8393331","Type":"ContainerStarted","Data":"6fa32847ef45559daeb444d7c0599fc6e3aaec1ea87cc15893f1691eeb950caf"} Jan 26 11:22:39 crc kubenswrapper[4619]: I0126 11:22:39.271240 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad779660-a430-4b5c-91dd-41b9582a4215" path="/var/lib/kubelet/pods/ad779660-a430-4b5c-91dd-41b9582a4215/volumes" Jan 26 11:22:40 crc kubenswrapper[4619]: I0126 11:22:40.258848 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" event={"ID":"ac31ccc2-07ae-4326-80dd-12b4e8393331","Type":"ContainerStarted","Data":"8bcf93f1335520b9dca83e0281d46ed889962681fd2dd07b2f5f14d5af2c2e58"} Jan 26 11:22:40 crc kubenswrapper[4619]: I0126 11:22:40.274570 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" podStartSLOduration=1.690790883 podStartE2EDuration="2.274551842s" podCreationTimestamp="2026-01-26 11:22:38 +0000 UTC" firstStartedPulling="2026-01-26 11:22:38.840306856 +0000 UTC m=+1657.874347572" lastFinishedPulling="2026-01-26 11:22:39.424067815 +0000 UTC m=+1658.458108531" observedRunningTime="2026-01-26 11:22:40.271816627 +0000 UTC m=+1659.305857343" watchObservedRunningTime="2026-01-26 11:22:40.274551842 +0000 UTC m=+1659.308592558" Jan 26 11:22:44 crc kubenswrapper[4619]: I0126 11:22:44.030156 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-74bc6"] Jan 26 11:22:44 crc kubenswrapper[4619]: I0126 11:22:44.041372 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-74bc6"] Jan 26 11:22:45 crc kubenswrapper[4619]: I0126 11:22:45.261648 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:22:45 crc kubenswrapper[4619]: E0126 11:22:45.262390 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:22:45 crc kubenswrapper[4619]: I0126 11:22:45.273920 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fa96fb-5f34-4f6c-932b-14420024f02d" path="/var/lib/kubelet/pods/d8fa96fb-5f34-4f6c-932b-14420024f02d/volumes" Jan 26 11:22:56 crc kubenswrapper[4619]: I0126 11:22:56.261990 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:22:56 crc kubenswrapper[4619]: E0126 11:22:56.263196 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:23:07 crc kubenswrapper[4619]: I0126 11:23:07.261026 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:23:07 crc kubenswrapper[4619]: E0126 11:23:07.261787 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:23:21 crc kubenswrapper[4619]: I0126 11:23:21.267994 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:23:21 crc kubenswrapper[4619]: E0126 11:23:21.269074 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:23:26 crc kubenswrapper[4619]: I0126 11:23:26.055771 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-64h92"] Jan 26 11:23:26 crc kubenswrapper[4619]: I0126 11:23:26.062133 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-64h92"] Jan 26 11:23:27 crc kubenswrapper[4619]: I0126 11:23:27.280663 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fffa56-d701-41ca-8a74-4c72015701f4" path="/var/lib/kubelet/pods/76fffa56-d701-41ca-8a74-4c72015701f4/volumes" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.036412 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f7nh4"] Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.046825 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f7nh4"] Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.183789 4619 scope.go:117] "RemoveContainer" containerID="20b1759c869725f0dcb69fe39834fbb9b37c82a13f1ace62b905491a934a6013" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.222976 4619 scope.go:117] "RemoveContainer" containerID="b3be53181c90b989cb6fefc6626a61dfce098ce102082e9faa2e5d9dc4b6b001" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.259360 4619 scope.go:117] "RemoveContainer" containerID="f6bea69ee45a26ee23ec1e0d4dc318e40aa1c6feca121f1040eb43bb5b01a077" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.273989 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5b79a0-cb51-483b-96b0-8f85b385692c" path="/var/lib/kubelet/pods/9d5b79a0-cb51-483b-96b0-8f85b385692c/volumes" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.295785 4619 scope.go:117] "RemoveContainer" containerID="3cbfe2ad589d4c43ff7fb3422764371cce00dc7651b4039e7dc209aea8f34404" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.336894 4619 scope.go:117] "RemoveContainer" 
containerID="9dc902fa13ed66ab7c2dcd87b7b6ea31f0c59da47277b4b5f9c421710efa76de" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.386372 4619 scope.go:117] "RemoveContainer" containerID="0f60d824264b73a6cd33dcdee653553331bc81616e47b21c9610ac8f258d23b4" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.434054 4619 scope.go:117] "RemoveContainer" containerID="0852227a61d229754e598aba2008de299e2be76bfdb8865031a60afed1b61057" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.477593 4619 scope.go:117] "RemoveContainer" containerID="8960d0b808e54b3d3dcdbaab7921049adf220a66b3cc982d9e72064205a11648" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.500184 4619 scope.go:117] "RemoveContainer" containerID="a858cedeadd66a6f0470c5e2e7365be5d50df4d22b4a182bc51714ed8e7b701c" Jan 26 11:23:31 crc kubenswrapper[4619]: I0126 11:23:31.525324 4619 scope.go:117] "RemoveContainer" containerID="ac72704304b81f23b2ccc5ee50a81555259eff724527acbeb3f6eb42005bbf77" Jan 26 11:23:35 crc kubenswrapper[4619]: I0126 11:23:35.261700 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:23:35 crc kubenswrapper[4619]: E0126 11:23:35.262498 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:23:38 crc kubenswrapper[4619]: I0126 11:23:38.052475 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ltrbg"] Jan 26 11:23:38 crc kubenswrapper[4619]: I0126 11:23:38.060449 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ltrbg"] Jan 26 11:23:39 crc kubenswrapper[4619]: I0126 11:23:39.275039 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3628aeb0-1e2e-4275-a914-31e18f47a989" path="/var/lib/kubelet/pods/3628aeb0-1e2e-4275-a914-31e18f47a989/volumes" Jan 26 11:23:47 crc kubenswrapper[4619]: I0126 11:23:47.262141 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:23:47 crc kubenswrapper[4619]: E0126 11:23:47.263255 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:23:49 crc kubenswrapper[4619]: I0126 11:23:49.038550 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vmzjm"] Jan 26 11:23:49 crc kubenswrapper[4619]: I0126 11:23:49.088393 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vmzjm"] Jan 26 11:23:49 crc kubenswrapper[4619]: I0126 11:23:49.273809 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe185f3-c64d-47a7-9c93-f40ef8d24d9e" path="/var/lib/kubelet/pods/4fe185f3-c64d-47a7-9c93-f40ef8d24d9e/volumes" Jan 26 11:23:51 crc kubenswrapper[4619]: I0126 11:23:51.028606 4619 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-db-sync-7zn9h"] Jan 26 11:23:51 crc kubenswrapper[4619]: I0126 11:23:51.036605 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7zn9h"] Jan 26 11:23:51 crc kubenswrapper[4619]: I0126 11:23:51.275869 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f56f30-76de-408b-bbe1-8ef2b764f26b" path="/var/lib/kubelet/pods/42f56f30-76de-408b-bbe1-8ef2b764f26b/volumes" Jan 26 11:24:02 crc kubenswrapper[4619]: I0126 11:24:02.260721 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:24:02 crc kubenswrapper[4619]: E0126 11:24:02.261580 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:24:15 crc kubenswrapper[4619]: I0126 11:24:15.263474 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:24:15 crc kubenswrapper[4619]: E0126 11:24:15.264415 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:24:27 crc kubenswrapper[4619]: I0126 11:24:27.262716 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:24:27 crc kubenswrapper[4619]: E0126 11:24:27.263503 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:24:31 crc kubenswrapper[4619]: I0126 11:24:31.739902 4619 scope.go:117] "RemoveContainer" containerID="8ace1614ebcd4e162c21785ba58440191b5b058d2f8d798ed1d440fddc61c40a" Jan 26 11:24:31 crc kubenswrapper[4619]: I0126 11:24:31.802164 4619 scope.go:117] "RemoveContainer" containerID="fb61a5a65be44d00f88b4c6db055e747ee591a7ea7e16082937f513414454156" Jan 26 11:24:31 crc kubenswrapper[4619]: I0126 11:24:31.835022 4619 scope.go:117] "RemoveContainer" containerID="33cf5b414a5f2c4f75bea2cf04e5c3cc2aff136af62769a26d6566b5900430d1" Jan 26 11:24:38 crc kubenswrapper[4619]: I0126 11:24:38.261757 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:24:38 crc kubenswrapper[4619]: E0126 11:24:38.263132 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:24:44 crc kubenswrapper[4619]: I0126 11:24:44.042250 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9xp94"] Jan 26 11:24:44 crc kubenswrapper[4619]: I0126 11:24:44.050712 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9xp94"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.061132 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6d7e-account-create-update-xmjsb"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.073459 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6d7e-account-create-update-xmjsb"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.088929 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-n9nnv"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.103083 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6717-account-create-update-h2wzj"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.114719 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9c25-account-create-update-sgdh4"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.124266 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-n9nnv"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.135076 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9c25-account-create-update-sgdh4"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.142041 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6717-account-create-update-h2wzj"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.148393 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-g8k9m"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.154359 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-g8k9m"] Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.273487 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d35b94-5e1e-4fd2-a2d7-40ca92101a54" path="/var/lib/kubelet/pods/21d35b94-5e1e-4fd2-a2d7-40ca92101a54/volumes" Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.274358 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3168c764-1c23-47f7-ad80-20fe2f860ffd" path="/var/lib/kubelet/pods/3168c764-1c23-47f7-ad80-20fe2f860ffd/volumes" Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.275065 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1bb0260-95f5-41fd-b051-0f122151a9c0" path="/var/lib/kubelet/pods/b1bb0260-95f5-41fd-b051-0f122151a9c0/volumes" Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.275815 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d77756-36ba-479e-8688-779283522d80" path="/var/lib/kubelet/pods/b3d77756-36ba-479e-8688-779283522d80/volumes" Jan 26 11:24:45 crc kubenswrapper[4619]: I0126 11:24:45.277228 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb30d86a-e144-4072-821a-f159e5dbdf31" path="/var/lib/kubelet/pods/cb30d86a-e144-4072-821a-f159e5dbdf31/volumes" Jan 26 11:24:45 crc kubenswrapper[4619]: 
I0126 11:24:45.277992 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa8eff3-988a-4fe5-93b9-371636a0ae8f" path="/var/lib/kubelet/pods/ffa8eff3-988a-4fe5-93b9-371636a0ae8f/volumes" Jan 26 11:24:49 crc kubenswrapper[4619]: I0126 11:24:49.260794 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:24:49 crc kubenswrapper[4619]: E0126 11:24:49.261460 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:25:02 crc kubenswrapper[4619]: I0126 11:25:02.262833 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:25:02 crc kubenswrapper[4619]: E0126 11:25:02.264035 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:25:12 crc kubenswrapper[4619]: I0126 11:25:12.703064 4619 generic.go:334] "Generic (PLEG): container finished" podID="ac31ccc2-07ae-4326-80dd-12b4e8393331" containerID="8bcf93f1335520b9dca83e0281d46ed889962681fd2dd07b2f5f14d5af2c2e58" exitCode=0 Jan 26 11:25:12 crc kubenswrapper[4619]: I0126 11:25:12.703276 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" event={"ID":"ac31ccc2-07ae-4326-80dd-12b4e8393331","Type":"ContainerDied","Data":"8bcf93f1335520b9dca83e0281d46ed889962681fd2dd07b2f5f14d5af2c2e58"} Jan 26 11:25:13 crc kubenswrapper[4619]: I0126 11:25:13.261216 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:25:13 crc kubenswrapper[4619]: E0126 11:25:13.261511 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.064968 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjsfd"] Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.073530 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjsfd"] Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.229146 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.319207 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s94lq\" (UniqueName: \"kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq\") pod \"ac31ccc2-07ae-4326-80dd-12b4e8393331\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.319318 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam\") pod \"ac31ccc2-07ae-4326-80dd-12b4e8393331\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.319385 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory\") pod \"ac31ccc2-07ae-4326-80dd-12b4e8393331\" (UID: \"ac31ccc2-07ae-4326-80dd-12b4e8393331\") " Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.340767 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq" (OuterVolumeSpecName: "kube-api-access-s94lq") pod "ac31ccc2-07ae-4326-80dd-12b4e8393331" (UID: "ac31ccc2-07ae-4326-80dd-12b4e8393331"). InnerVolumeSpecName "kube-api-access-s94lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.359794 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory" (OuterVolumeSpecName: "inventory") pod "ac31ccc2-07ae-4326-80dd-12b4e8393331" (UID: "ac31ccc2-07ae-4326-80dd-12b4e8393331"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.366051 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac31ccc2-07ae-4326-80dd-12b4e8393331" (UID: "ac31ccc2-07ae-4326-80dd-12b4e8393331"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.423231 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s94lq\" (UniqueName: \"kubernetes.io/projected/ac31ccc2-07ae-4326-80dd-12b4e8393331-kube-api-access-s94lq\") on node \"crc\" DevicePath \"\"" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.423263 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.423274 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac31ccc2-07ae-4326-80dd-12b4e8393331-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.720911 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" event={"ID":"ac31ccc2-07ae-4326-80dd-12b4e8393331","Type":"ContainerDied","Data":"6fa32847ef45559daeb444d7c0599fc6e3aaec1ea87cc15893f1691eeb950caf"} Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.721226 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa32847ef45559daeb444d7c0599fc6e3aaec1ea87cc15893f1691eeb950caf" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.721281 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.845441 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb"] Jan 26 11:25:14 crc kubenswrapper[4619]: E0126 11:25:14.845821 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac31ccc2-07ae-4326-80dd-12b4e8393331" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.845838 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac31ccc2-07ae-4326-80dd-12b4e8393331" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.846036 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac31ccc2-07ae-4326-80dd-12b4e8393331" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.846632 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.857275 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb"] Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.860442 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.860547 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.860442 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.860794 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.932129 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.932394 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:14 crc kubenswrapper[4619]: I0126 11:25:14.932697 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms9s\" (UniqueName: \"kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.034800 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.035077 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.035250 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms9s\" (UniqueName: 
\"kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.038814 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.047186 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.058459 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms9s\" (UniqueName: \"kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.179049 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.300990 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f7823e-9371-4a62-b554-9b30b3bb3483" path="/var/lib/kubelet/pods/b5f7823e-9371-4a62-b554-9b30b3bb3483/volumes" Jan 26 11:25:15 crc kubenswrapper[4619]: I0126 11:25:15.721602 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb"] Jan 26 11:25:16 crc kubenswrapper[4619]: I0126 11:25:16.748821 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" event={"ID":"352a4117-3bba-4714-a367-916874cba86f","Type":"ContainerStarted","Data":"c4cf9cf0535b7d4e240c7c5fdd11af0b72ed9d3cd749dea09d264a8919042b36"} Jan 26 11:25:17 crc kubenswrapper[4619]: I0126 11:25:17.759976 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" event={"ID":"352a4117-3bba-4714-a367-916874cba86f","Type":"ContainerStarted","Data":"b576b017575e19a7b3c782aea8d40ffbc114bc462bae333eb909f9f2e70fe635"} Jan 26 11:25:17 crc kubenswrapper[4619]: I0126 11:25:17.778205 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" podStartSLOduration=3.080835392 podStartE2EDuration="3.778187676s" podCreationTimestamp="2026-01-26 11:25:14 +0000 UTC" firstStartedPulling="2026-01-26 11:25:15.73616442 +0000 UTC m=+1814.770205146" lastFinishedPulling="2026-01-26 11:25:16.433516724 +0000 UTC m=+1815.467557430" observedRunningTime="2026-01-26 11:25:17.776734926 +0000 UTC 
m=+1816.810775652" watchObservedRunningTime="2026-01-26 11:25:17.778187676 +0000 UTC m=+1816.812228392" Jan 26 11:25:25 crc kubenswrapper[4619]: I0126 11:25:25.261526 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:25:25 crc kubenswrapper[4619]: E0126 11:25:25.262471 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:25:31 crc kubenswrapper[4619]: I0126 11:25:31.973370 4619 scope.go:117] "RemoveContainer" containerID="42881539eefcbd3d0935cefff96205634895b18de1c1066bc0b66d0e8e8223c1" Jan 26 11:25:31 crc kubenswrapper[4619]: I0126 11:25:31.998092 4619 scope.go:117] "RemoveContainer" containerID="82340cc71d3000dc912ce9d5066620b2a8b74bb08f9f99e329572e593f9a9b8b" Jan 26 11:25:32 crc kubenswrapper[4619]: I0126 11:25:32.068723 4619 scope.go:117] "RemoveContainer" containerID="2b489812844dcb7e10e1adf0b8b04a1b53fdd8833b52af2a74b9b6f1392c8dd9" Jan 26 11:25:32 crc kubenswrapper[4619]: I0126 11:25:32.111198 4619 scope.go:117] "RemoveContainer" containerID="5c9246b5938c0c3e73b72f0031d0db1c3c89ba9889e682cc21f04c103124fc72" Jan 26 11:25:32 crc kubenswrapper[4619]: I0126 11:25:32.152057 4619 scope.go:117] "RemoveContainer" containerID="fe9298eb491e1aac696e8b00278e5b70a29d4bb586f884898f817f7ecea3085d" Jan 26 11:25:32 crc kubenswrapper[4619]: I0126 11:25:32.197031 4619 scope.go:117] "RemoveContainer" containerID="bd886717fa35c8e8478a4f3b20f429dfeab42c99c0bf638b7d19f0f79b3a8c4b" Jan 26 11:25:32 crc kubenswrapper[4619]: I0126 11:25:32.249420 4619 scope.go:117] "RemoveContainer" containerID="15b133ebb79bdccdd304699b3317aeb6f1d27be7ab886144cafb6b655d9c65eb" Jan 26 11:25:34 crc kubenswrapper[4619]: I0126 11:25:34.045298 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x6sj5"] Jan 26 11:25:34 crc kubenswrapper[4619]: I0126 11:25:34.056487 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s6glk"] Jan 26 11:25:34 crc kubenswrapper[4619]: I0126 11:25:34.068321 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x6sj5"] Jan 26 11:25:34 crc kubenswrapper[4619]: I0126 11:25:34.076155 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s6glk"] Jan 26 11:25:35 crc kubenswrapper[4619]: I0126 11:25:35.273221 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6460284b-1cb6-444b-a2f7-676f38e03a78" path="/var/lib/kubelet/pods/6460284b-1cb6-444b-a2f7-676f38e03a78/volumes" Jan 26 11:25:35 crc kubenswrapper[4619]: I0126 11:25:35.273888 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d93c669-300a-4954-b044-df49960ba3f0" path="/var/lib/kubelet/pods/8d93c669-300a-4954-b044-df49960ba3f0/volumes" Jan 26 11:25:40 crc kubenswrapper[4619]: I0126 11:25:40.261530 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:25:40 crc kubenswrapper[4619]: E0126 11:25:40.262857 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:25:53 crc kubenswrapper[4619]: I0126 11:25:53.261280 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:25:54 crc kubenswrapper[4619]: I0126 11:25:54.076300 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89"} Jan 26 11:26:16 crc kubenswrapper[4619]: I0126 11:26:16.041263 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvbr8"] Jan 26 11:26:16 crc kubenswrapper[4619]: I0126 11:26:16.052765 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvbr8"] Jan 26 11:26:17 crc kubenswrapper[4619]: I0126 11:26:17.276408 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f578a895-e882-4e3b-9ef7-3e1d5b14a13f" path="/var/lib/kubelet/pods/f578a895-e882-4e3b-9ef7-3e1d5b14a13f/volumes" Jan 26 11:26:32 crc kubenswrapper[4619]: I0126 11:26:32.388938 4619 scope.go:117] "RemoveContainer" containerID="570fc1861584d28033a84f8f3a28cfc3146ab376a6a71722dc1634536d4ecd74" Jan 26 11:26:32 crc kubenswrapper[4619]: I0126 11:26:32.430087 4619 scope.go:117] "RemoveContainer" containerID="a5be8eb5920d8eb878a8fb03dc681fd467edd376b013b6859ce64b040dad4b0d" Jan 26 11:26:32 crc kubenswrapper[4619]: I0126 11:26:32.478881 4619 scope.go:117] "RemoveContainer" containerID="6815efff8cbe4552e31f853e8e4bf22750b69edf18cbd96a5c30176cde068444" Jan 26 11:26:39 crc kubenswrapper[4619]: I0126 11:26:39.475129 4619 generic.go:334] "Generic (PLEG): container finished" podID="352a4117-3bba-4714-a367-916874cba86f" containerID="b576b017575e19a7b3c782aea8d40ffbc114bc462bae333eb909f9f2e70fe635" exitCode=0 Jan 26 11:26:39 crc kubenswrapper[4619]: I0126 11:26:39.475243 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" event={"ID":"352a4117-3bba-4714-a367-916874cba86f","Type":"ContainerDied","Data":"b576b017575e19a7b3c782aea8d40ffbc114bc462bae333eb909f9f2e70fe635"} Jan 26 11:26:40 crc kubenswrapper[4619]: I0126 11:26:40.887268 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.044304 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms9s\" (UniqueName: \"kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s\") pod \"352a4117-3bba-4714-a367-916874cba86f\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.044421 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory\") pod \"352a4117-3bba-4714-a367-916874cba86f\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.044498 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam\") pod \"352a4117-3bba-4714-a367-916874cba86f\" (UID: \"352a4117-3bba-4714-a367-916874cba86f\") " Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.053384 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s" (OuterVolumeSpecName: "kube-api-access-bms9s") pod "352a4117-3bba-4714-a367-916874cba86f" (UID: "352a4117-3bba-4714-a367-916874cba86f"). InnerVolumeSpecName "kube-api-access-bms9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.079143 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory" (OuterVolumeSpecName: "inventory") pod "352a4117-3bba-4714-a367-916874cba86f" (UID: "352a4117-3bba-4714-a367-916874cba86f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.079537 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "352a4117-3bba-4714-a367-916874cba86f" (UID: "352a4117-3bba-4714-a367-916874cba86f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.146444 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms9s\" (UniqueName: \"kubernetes.io/projected/352a4117-3bba-4714-a367-916874cba86f-kube-api-access-bms9s\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.146479 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.146492 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/352a4117-3bba-4714-a367-916874cba86f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.493094 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" event={"ID":"352a4117-3bba-4714-a367-916874cba86f","Type":"ContainerDied","Data":"c4cf9cf0535b7d4e240c7c5fdd11af0b72ed9d3cd749dea09d264a8919042b36"} Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.493188 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4cf9cf0535b7d4e240c7c5fdd11af0b72ed9d3cd749dea09d264a8919042b36" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.493200 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.587134 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54"] Jan 26 11:26:41 crc kubenswrapper[4619]: E0126 11:26:41.587587 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352a4117-3bba-4714-a367-916874cba86f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.587612 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="352a4117-3bba-4714-a367-916874cba86f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.587805 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="352a4117-3bba-4714-a367-916874cba86f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.588385 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.593685 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.593768 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.593895 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.594033 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.598115 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54"] Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.759135 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.759318 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.759945 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.862183 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.862270 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.862367 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.875347 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.875737 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.890517 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qzt54\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:41 crc kubenswrapper[4619]: I0126 11:26:41.904236 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:42 crc kubenswrapper[4619]: I0126 11:26:42.403236 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54"] Jan 26 11:26:42 crc kubenswrapper[4619]: I0126 11:26:42.411287 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:26:42 crc kubenswrapper[4619]: I0126 11:26:42.500632 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" event={"ID":"c99e3e59-22b4-4fe8-8fa6-69845f56ef45","Type":"ContainerStarted","Data":"879d1e9729d1f470ad043ffda93e16d8fd02806582f267506669f6a4ba8fb880"} Jan 26 11:26:44 crc kubenswrapper[4619]: I0126 11:26:44.523682 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" event={"ID":"c99e3e59-22b4-4fe8-8fa6-69845f56ef45","Type":"ContainerStarted","Data":"582dcbd1524a699afbaa52335a1892dbd22bd87be991cf61c444644842d9562e"} Jan 26 11:26:44 crc kubenswrapper[4619]: I0126 11:26:44.543358 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" podStartSLOduration=2.456854164 podStartE2EDuration="3.543336815s" podCreationTimestamp="2026-01-26 11:26:41 +0000 UTC" firstStartedPulling="2026-01-26 11:26:42.411090399 +0000 UTC m=+1901.445131115" lastFinishedPulling="2026-01-26 11:26:43.49757305 +0000 UTC m=+1902.531613766" observedRunningTime="2026-01-26 11:26:44.541982108 +0000 UTC m=+1903.576022824" watchObservedRunningTime="2026-01-26 11:26:44.543336815 +0000 UTC m=+1903.577377531" Jan 26 11:26:49 crc 
kubenswrapper[4619]: I0126 11:26:49.562452 4619 generic.go:334] "Generic (PLEG): container finished" podID="c99e3e59-22b4-4fe8-8fa6-69845f56ef45" containerID="582dcbd1524a699afbaa52335a1892dbd22bd87be991cf61c444644842d9562e" exitCode=0 Jan 26 11:26:49 crc kubenswrapper[4619]: I0126 11:26:49.562546 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" event={"ID":"c99e3e59-22b4-4fe8-8fa6-69845f56ef45","Type":"ContainerDied","Data":"582dcbd1524a699afbaa52335a1892dbd22bd87be991cf61c444644842d9562e"} Jan 26 11:26:50 crc kubenswrapper[4619]: I0126 11:26:50.981572 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.155426 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd\") pod \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.155586 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam\") pod \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.155755 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory\") pod \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\" (UID: \"c99e3e59-22b4-4fe8-8fa6-69845f56ef45\") " Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.161183 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd" (OuterVolumeSpecName: "kube-api-access-5xhgd") pod "c99e3e59-22b4-4fe8-8fa6-69845f56ef45" (UID: "c99e3e59-22b4-4fe8-8fa6-69845f56ef45"). InnerVolumeSpecName "kube-api-access-5xhgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.184701 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c99e3e59-22b4-4fe8-8fa6-69845f56ef45" (UID: "c99e3e59-22b4-4fe8-8fa6-69845f56ef45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.185805 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory" (OuterVolumeSpecName: "inventory") pod "c99e3e59-22b4-4fe8-8fa6-69845f56ef45" (UID: "c99e3e59-22b4-4fe8-8fa6-69845f56ef45"). InnerVolumeSpecName "inventory". 
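
The entries above trace one ansible-runner job pod (validate-network-edpm-deployment-openstack-edpm-ipam-qzt54) through a complete kubelet lifecycle: SyncLoop ADD, secret and projected volume attach and mount, sandbox creation, PLEG ContainerStarted/ContainerDied events, and finally volume teardown. As a minimal sketch (the helper name pleg_events is illustrative, not part of kubelet, and it assumes each log entry has been joined onto a single line), the PLEG events can be pulled out directly, since kubelet emits the event payload as valid JSON:

import json
import re

# Matches entries like the ones above:
#   kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/..."
#   event={"ID":"...","Type":"ContainerStarted","Data":"<64-hex container id>"}
PLEG_RE = re.compile(
    r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" event=(?P<event>\{[^}]*\})'
)

def pleg_events(lines):
    """Yield (pod, event dict) for every PLEG event entry."""
    for line in lines:
        m = PLEG_RE.search(line)
        if m:
            yield m.group("pod"), json.loads(m.group("event"))

# Usage: for pod, ev in pleg_events(open("kubelet.log")):
#            print(pod, ev["Type"], ev["Data"])
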
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.258046 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.258091 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.258102 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/c99e3e59-22b4-4fe8-8fa6-69845f56ef45-kube-api-access-5xhgd\") on node \"crc\" DevicePath \"\"" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.580967 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" event={"ID":"c99e3e59-22b4-4fe8-8fa6-69845f56ef45","Type":"ContainerDied","Data":"879d1e9729d1f470ad043ffda93e16d8fd02806582f267506669f6a4ba8fb880"} Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.581014 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="879d1e9729d1f470ad043ffda93e16d8fd02806582f267506669f6a4ba8fb880" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.581038 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qzt54" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.666571 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v"] Jan 26 11:26:51 crc kubenswrapper[4619]: E0126 11:26:51.667063 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99e3e59-22b4-4fe8-8fa6-69845f56ef45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.667093 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99e3e59-22b4-4fe8-8fa6-69845f56ef45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.667397 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99e3e59-22b4-4fe8-8fa6-69845f56ef45" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.668217 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.671094 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.671234 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.673127 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.675659 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.693783 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v"] Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.870185 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.870327 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.870357 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnd96\" (UniqueName: \"kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.972503 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.972656 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.972689 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnd96\" (UniqueName: \"kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.976582 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.977529 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.990245 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnd96\" (UniqueName: \"kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c4v2v\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:51 crc kubenswrapper[4619]: I0126 11:26:51.995042 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:26:52 crc kubenswrapper[4619]: I0126 11:26:52.512916 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v"] Jan 26 11:26:52 crc kubenswrapper[4619]: I0126 11:26:52.589386 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" event={"ID":"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e","Type":"ContainerStarted","Data":"c57b5a1e91e26c2e565f21ee1d0edf5cdbcefc214f88494784b7f11d1f95bd57"} Jan 26 11:26:53 crc kubenswrapper[4619]: I0126 11:26:53.598890 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" event={"ID":"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e","Type":"ContainerStarted","Data":"81b0898fb57f6d92f88fcfc38d281e49b532792f9a7491f9cf85fe649e73dcc4"} Jan 26 11:26:53 crc kubenswrapper[4619]: I0126 11:26:53.622425 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" podStartSLOduration=2.002256182 podStartE2EDuration="2.622408484s" podCreationTimestamp="2026-01-26 11:26:51 +0000 UTC" firstStartedPulling="2026-01-26 11:26:52.519698811 +0000 UTC m=+1911.553739527" lastFinishedPulling="2026-01-26 11:26:53.139851113 +0000 UTC m=+1912.173891829" observedRunningTime="2026-01-26 11:26:53.61419376 +0000 UTC m=+1912.648234476" watchObservedRunningTime="2026-01-26 11:26:53.622408484 +0000 UTC m=+1912.656449190" Jan 26 11:27:34 crc kubenswrapper[4619]: I0126 11:27:34.943082 4619 generic.go:334] "Generic (PLEG): container finished" podID="e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" containerID="81b0898fb57f6d92f88fcfc38d281e49b532792f9a7491f9cf85fe649e73dcc4" exitCode=0 Jan 26 11:27:34 crc kubenswrapper[4619]: I0126 11:27:34.943164 4619 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" event={"ID":"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e","Type":"ContainerDied","Data":"81b0898fb57f6d92f88fcfc38d281e49b532792f9a7491f9cf85fe649e73dcc4"} Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.383306 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.417753 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam\") pod \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.417831 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory\") pod \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.417866 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnd96\" (UniqueName: \"kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96\") pod \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\" (UID: \"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e\") " Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.429123 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96" (OuterVolumeSpecName: "kube-api-access-jnd96") pod "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" (UID: "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e"). InnerVolumeSpecName "kube-api-access-jnd96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.450740 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory" (OuterVolumeSpecName: "inventory") pod "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" (UID: "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.467832 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" (UID: "e0c6e648-dad5-48ab-8eb3-0e40a9225e9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.520986 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.521025 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.521037 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnd96\" (UniqueName: \"kubernetes.io/projected/e0c6e648-dad5-48ab-8eb3-0e40a9225e9e-kube-api-access-jnd96\") on node \"crc\" DevicePath \"\"" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.958689 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" event={"ID":"e0c6e648-dad5-48ab-8eb3-0e40a9225e9e","Type":"ContainerDied","Data":"c57b5a1e91e26c2e565f21ee1d0edf5cdbcefc214f88494784b7f11d1f95bd57"} Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.958732 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c57b5a1e91e26c2e565f21ee1d0edf5cdbcefc214f88494784b7f11d1f95bd57" Jan 26 11:27:36 crc kubenswrapper[4619]: I0126 11:27:36.958736 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c4v2v" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.048824 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8"] Jan 26 11:27:37 crc kubenswrapper[4619]: E0126 11:27:37.049520 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.049543 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.049773 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c6e648-dad5-48ab-8eb3-0e40a9225e9e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.050389 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.052241 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.052770 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.053070 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.053407 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.077137 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8"] Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.130570 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.130630 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.130659 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmrh\" (UniqueName: \"kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.232161 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmrh\" (UniqueName: \"kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.232336 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.232368 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.240307 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.250202 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.250395 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmrh\" (UniqueName: \"kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-858f8\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.371406 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.871360 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8"] Jan 26 11:27:37 crc kubenswrapper[4619]: I0126 11:27:37.967475 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" event={"ID":"51653ef8-78e6-4e44-9391-e815c9d092bf","Type":"ContainerStarted","Data":"4c1965335908f54aac78ee3176ec5f1cf20258fc419a7e71fcc8c8f178776312"} Jan 26 11:27:38 crc kubenswrapper[4619]: I0126 11:27:38.980075 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" event={"ID":"51653ef8-78e6-4e44-9391-e815c9d092bf","Type":"ContainerStarted","Data":"9d43fce78e6734537358d513dc8ea23cf119b14519b1d0d876b407872a34efbd"} Jan 26 11:28:14 crc kubenswrapper[4619]: I0126 11:28:14.234797 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:28:14 crc kubenswrapper[4619]: I0126 11:28:14.235416 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:28:37 crc kubenswrapper[4619]: I0126 11:28:37.653569 4619 generic.go:334] "Generic (PLEG): container finished" 
podID="51653ef8-78e6-4e44-9391-e815c9d092bf" containerID="9d43fce78e6734537358d513dc8ea23cf119b14519b1d0d876b407872a34efbd" exitCode=0 Jan 26 11:28:37 crc kubenswrapper[4619]: I0126 11:28:37.653701 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" event={"ID":"51653ef8-78e6-4e44-9391-e815c9d092bf","Type":"ContainerDied","Data":"9d43fce78e6734537358d513dc8ea23cf119b14519b1d0d876b407872a34efbd"} Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.027607 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.135081 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntmrh\" (UniqueName: \"kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh\") pod \"51653ef8-78e6-4e44-9391-e815c9d092bf\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.135130 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam\") pod \"51653ef8-78e6-4e44-9391-e815c9d092bf\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.135351 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory\") pod \"51653ef8-78e6-4e44-9391-e815c9d092bf\" (UID: \"51653ef8-78e6-4e44-9391-e815c9d092bf\") " Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.140467 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh" (OuterVolumeSpecName: "kube-api-access-ntmrh") pod "51653ef8-78e6-4e44-9391-e815c9d092bf" (UID: "51653ef8-78e6-4e44-9391-e815c9d092bf"). InnerVolumeSpecName "kube-api-access-ntmrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.165436 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51653ef8-78e6-4e44-9391-e815c9d092bf" (UID: "51653ef8-78e6-4e44-9391-e815c9d092bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.169088 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory" (OuterVolumeSpecName: "inventory") pod "51653ef8-78e6-4e44-9391-e815c9d092bf" (UID: "51653ef8-78e6-4e44-9391-e815c9d092bf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.237869 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntmrh\" (UniqueName: \"kubernetes.io/projected/51653ef8-78e6-4e44-9391-e815c9d092bf-kube-api-access-ntmrh\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.237908 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.237920 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51653ef8-78e6-4e44-9391-e815c9d092bf-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.669587 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" event={"ID":"51653ef8-78e6-4e44-9391-e815c9d092bf","Type":"ContainerDied","Data":"4c1965335908f54aac78ee3176ec5f1cf20258fc419a7e71fcc8c8f178776312"} Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.669647 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1965335908f54aac78ee3176ec5f1cf20258fc419a7e71fcc8c8f178776312" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.669657 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-858f8" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.759861 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs6jh"] Jan 26 11:28:39 crc kubenswrapper[4619]: E0126 11:28:39.760294 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51653ef8-78e6-4e44-9391-e815c9d092bf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.760323 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="51653ef8-78e6-4e44-9391-e815c9d092bf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.760559 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="51653ef8-78e6-4e44-9391-e815c9d092bf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.761304 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.763757 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.763801 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.767418 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.767531 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.780027 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs6jh"] Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.850189 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fgr\" (UniqueName: \"kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.850470 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.850512 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.952472 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.952770 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fgr\" (UniqueName: \"kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.952831 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc 
kubenswrapper[4619]: I0126 11:28:39.957698 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.958531 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:39 crc kubenswrapper[4619]: I0126 11:28:39.969559 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fgr\" (UniqueName: \"kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr\") pod \"ssh-known-hosts-edpm-deployment-qs6jh\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:40 crc kubenswrapper[4619]: I0126 11:28:40.081329 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:40 crc kubenswrapper[4619]: I0126 11:28:40.619239 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qs6jh"] Jan 26 11:28:40 crc kubenswrapper[4619]: I0126 11:28:40.677975 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" event={"ID":"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3","Type":"ContainerStarted","Data":"a9d4d47ba2e0b670877d6840ea0e6298c85265d58a64b8910bcadf3836c16195"} Jan 26 11:28:41 crc kubenswrapper[4619]: I0126 11:28:41.696188 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" event={"ID":"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3","Type":"ContainerStarted","Data":"a10d330e8598ef0e64e270bf3db02a57dd718b7281b50b19ecedbb6fd0c705f0"} Jan 26 11:28:41 crc kubenswrapper[4619]: I0126 11:28:41.723297 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" podStartSLOduration=2.0916119 podStartE2EDuration="2.723271205s" podCreationTimestamp="2026-01-26 11:28:39 +0000 UTC" firstStartedPulling="2026-01-26 11:28:40.634413638 +0000 UTC m=+2019.668454354" lastFinishedPulling="2026-01-26 11:28:41.266072943 +0000 UTC m=+2020.300113659" observedRunningTime="2026-01-26 11:28:41.718260758 +0000 UTC m=+2020.752301474" watchObservedRunningTime="2026-01-26 11:28:41.723271205 +0000 UTC m=+2020.757311931" Jan 26 11:28:44 crc kubenswrapper[4619]: I0126 11:28:44.234018 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:28:44 crc kubenswrapper[4619]: I0126 11:28:44.234357 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 11:28:48 crc kubenswrapper[4619]: I0126 11:28:48.761847 4619 generic.go:334] "Generic (PLEG): container finished" podID="7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" containerID="a10d330e8598ef0e64e270bf3db02a57dd718b7281b50b19ecedbb6fd0c705f0" exitCode=0 Jan 26 11:28:48 crc kubenswrapper[4619]: I0126 11:28:48.761913 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" event={"ID":"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3","Type":"ContainerDied","Data":"a10d330e8598ef0e64e270bf3db02a57dd718b7281b50b19ecedbb6fd0c705f0"} Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.169887 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.258324 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam\") pod \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.258563 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0\") pod \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.258720 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5fgr\" (UniqueName: \"kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr\") pod \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\" (UID: \"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3\") " Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.263699 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr" (OuterVolumeSpecName: "kube-api-access-k5fgr") pod "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" (UID: "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3"). InnerVolumeSpecName "kube-api-access-k5fgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.290696 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" (UID: "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.292511 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" (UID: "7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3"). InnerVolumeSpecName "inventory-0". 
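
The mount and unmount entries come in matched pairs keyed by the volume's UniqueName (e.g. kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0). A sketch that replays them to list volumes still mounted at any point in the log; note that in this file the SetUp messages are klog-quoted, so their inner quotes appear verbatim as \", while the TearDown messages are not, hence the two patterns differ (helper name is illustrative):

import re

MOUNT_RE = re.compile(
    r'MountVolume\.SetUp succeeded for volume .*?\(UniqueName: \\"(?P<vol>[^\\"]+)\\"'
)
UNMOUNT_RE = re.compile(r'UnmountVolume\.TearDown succeeded for volume "(?P<vol>[^"]+)"')

def live_mounts(lines):
    """Replay mount/unmount events; return volume paths mounted but not yet torn down."""
    live = set()
    for line in lines:
        if (m := MOUNT_RE.search(line)):
            live.add(m.group("vol"))
        elif (m := UNMOUNT_RE.search(line)):
            live.discard(m.group("vol"))
    return live
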
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.361793 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5fgr\" (UniqueName: \"kubernetes.io/projected/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-kube-api-access-k5fgr\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.362083 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.362096 4619 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.781451 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" event={"ID":"7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3","Type":"ContainerDied","Data":"a9d4d47ba2e0b670877d6840ea0e6298c85265d58a64b8910bcadf3836c16195"} Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.781493 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d4d47ba2e0b670877d6840ea0e6298c85265d58a64b8910bcadf3836c16195" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.781528 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qs6jh" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.935430 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt"] Jan 26 11:28:50 crc kubenswrapper[4619]: E0126 11:28:50.935921 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" containerName="ssh-known-hosts-edpm-deployment" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.935942 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" containerName="ssh-known-hosts-edpm-deployment" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.936151 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3" containerName="ssh-known-hosts-edpm-deployment" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.936807 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.943481 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.943506 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.943772 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.943903 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.961953 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt"] Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.973889 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.973960 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5lr\" (UniqueName: \"kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:50 crc kubenswrapper[4619]: I0126 11:28:50.973997 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.092053 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.092321 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.092373 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f5lr\" (UniqueName: \"kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.098247 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.102199 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.113853 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f5lr\" (UniqueName: \"kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xz9vt\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.273110 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:28:51 crc kubenswrapper[4619]: I0126 11:28:51.799569 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt"] Jan 26 11:28:52 crc kubenswrapper[4619]: I0126 11:28:52.797993 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" event={"ID":"52d1c976-907c-4749-a9d2-e4a518578cbc","Type":"ContainerStarted","Data":"12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8"} Jan 26 11:28:52 crc kubenswrapper[4619]: I0126 11:28:52.798539 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" event={"ID":"52d1c976-907c-4749-a9d2-e4a518578cbc","Type":"ContainerStarted","Data":"57bec52ff4545a2e8d401bb9bb34c3d0d549fab11808e9d24ebb7fba8e7da848"} Jan 26 11:28:52 crc kubenswrapper[4619]: I0126 11:28:52.815412 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" podStartSLOduration=2.365448615 podStartE2EDuration="2.815386538s" podCreationTimestamp="2026-01-26 11:28:50 +0000 UTC" firstStartedPulling="2026-01-26 11:28:51.805843229 +0000 UTC m=+2030.839883945" lastFinishedPulling="2026-01-26 11:28:52.255781142 +0000 UTC m=+2031.289821868" observedRunningTime="2026-01-26 11:28:52.810673399 +0000 UTC m=+2031.844714135" watchObservedRunningTime="2026-01-26 11:28:52.815386538 +0000 UTC m=+2031.849427274" Jan 26 11:29:01 crc kubenswrapper[4619]: I0126 11:29:01.866448 4619 generic.go:334] "Generic (PLEG): container finished" podID="52d1c976-907c-4749-a9d2-e4a518578cbc" containerID="12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8" exitCode=0 Jan 26 11:29:01 crc kubenswrapper[4619]: I0126 11:29:01.866539 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" event={"ID":"52d1c976-907c-4749-a9d2-e4a518578cbc","Type":"ContainerDied","Data":"12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8"} Jan 26 11:29:02 crc kubenswrapper[4619]: E0126 11:29:02.150940 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.278625 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.343249 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory\") pod \"52d1c976-907c-4749-a9d2-e4a518578cbc\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.343768 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f5lr\" (UniqueName: \"kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr\") pod \"52d1c976-907c-4749-a9d2-e4a518578cbc\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.343851 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam\") pod \"52d1c976-907c-4749-a9d2-e4a518578cbc\" (UID: \"52d1c976-907c-4749-a9d2-e4a518578cbc\") " Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.349438 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr" (OuterVolumeSpecName: "kube-api-access-7f5lr") pod "52d1c976-907c-4749-a9d2-e4a518578cbc" (UID: "52d1c976-907c-4749-a9d2-e4a518578cbc"). InnerVolumeSpecName "kube-api-access-7f5lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.370830 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "52d1c976-907c-4749-a9d2-e4a518578cbc" (UID: "52d1c976-907c-4749-a9d2-e4a518578cbc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.372840 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory" (OuterVolumeSpecName: "inventory") pod "52d1c976-907c-4749-a9d2-e4a518578cbc" (UID: "52d1c976-907c-4749-a9d2-e4a518578cbc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.446159 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.446199 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f5lr\" (UniqueName: \"kubernetes.io/projected/52d1c976-907c-4749-a9d2-e4a518578cbc-kube-api-access-7f5lr\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.446211 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52d1c976-907c-4749-a9d2-e4a518578cbc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.884229 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" event={"ID":"52d1c976-907c-4749-a9d2-e4a518578cbc","Type":"ContainerDied","Data":"57bec52ff4545a2e8d401bb9bb34c3d0d549fab11808e9d24ebb7fba8e7da848"} Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.884289 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57bec52ff4545a2e8d401bb9bb34c3d0d549fab11808e9d24ebb7fba8e7da848" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.884371 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xz9vt" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.991029 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v"] Jan 26 11:29:03 crc kubenswrapper[4619]: E0126 11:29:03.991554 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d1c976-907c-4749-a9d2-e4a518578cbc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.991578 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d1c976-907c-4749-a9d2-e4a518578cbc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.991834 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d1c976-907c-4749-a9d2-e4a518578cbc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.992574 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:03 crc kubenswrapper[4619]: I0126 11:29:03.998720 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:03.998993 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.000250 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.002337 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.015464 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v"] Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.157942 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.158309 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th22n\" (UniqueName: \"kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.158347 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.259387 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th22n\" (UniqueName: \"kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.259643 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.259822 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.264212 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.265745 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.275637 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th22n\" (UniqueName: \"kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.322953 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.854536 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v"] Jan 26 11:29:04 crc kubenswrapper[4619]: I0126 11:29:04.892043 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" event={"ID":"3098c4ac-7ae9-4af9-a23f-969054a718fe","Type":"ContainerStarted","Data":"097a3345715ed0692bec011e634d316156da08dbce84c8619d87f3f7e2942fbb"} Jan 26 11:29:05 crc kubenswrapper[4619]: I0126 11:29:05.902831 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" event={"ID":"3098c4ac-7ae9-4af9-a23f-969054a718fe","Type":"ContainerStarted","Data":"20cc999b5419dd93810c9cda4e535efbfb881e7b4efa7be6a0c4c65819ce7e07"} Jan 26 11:29:05 crc kubenswrapper[4619]: I0126 11:29:05.929443 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" podStartSLOduration=2.535100035 podStartE2EDuration="2.929427856s" podCreationTimestamp="2026-01-26 11:29:03 +0000 UTC" firstStartedPulling="2026-01-26 11:29:04.870797972 +0000 UTC m=+2043.904838688" lastFinishedPulling="2026-01-26 11:29:05.265125793 +0000 UTC m=+2044.299166509" observedRunningTime="2026-01-26 11:29:05.919169197 +0000 UTC m=+2044.953209913" watchObservedRunningTime="2026-01-26 11:29:05.929427856 +0000 UTC m=+2044.963468572" Jan 26 11:29:12 crc kubenswrapper[4619]: E0126 11:29:12.424991 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.234545 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.234951 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.235002 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.235872 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.235930 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89" gracePeriod=600 Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.991791 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89" exitCode=0 Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.991832 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89"} Jan 26 11:29:14 crc kubenswrapper[4619]: I0126 11:29:14.992065 4619 scope.go:117] "RemoveContainer" containerID="3654ed6a2adbc8c9b03f469d4fac0d668f99b333c07f0e11f135d0c00798b1fe" Jan 26 11:29:16 crc kubenswrapper[4619]: I0126 11:29:16.001808 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b"} Jan 26 11:29:17 crc kubenswrapper[4619]: I0126 11:29:17.013852 4619 generic.go:334] "Generic (PLEG): container finished" podID="3098c4ac-7ae9-4af9-a23f-969054a718fe" containerID="20cc999b5419dd93810c9cda4e535efbfb881e7b4efa7be6a0c4c65819ce7e07" exitCode=0 Jan 26 11:29:17 crc kubenswrapper[4619]: I0126 11:29:17.013952 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" event={"ID":"3098c4ac-7ae9-4af9-a23f-969054a718fe","Type":"ContainerDied","Data":"20cc999b5419dd93810c9cda4e535efbfb881e7b4efa7be6a0c4c65819ce7e07"} Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.508515 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.634217 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th22n\" (UniqueName: \"kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n\") pod \"3098c4ac-7ae9-4af9-a23f-969054a718fe\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.634309 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory\") pod \"3098c4ac-7ae9-4af9-a23f-969054a718fe\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.634605 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam\") pod \"3098c4ac-7ae9-4af9-a23f-969054a718fe\" (UID: \"3098c4ac-7ae9-4af9-a23f-969054a718fe\") " Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.649836 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n" (OuterVolumeSpecName: "kube-api-access-th22n") pod "3098c4ac-7ae9-4af9-a23f-969054a718fe" (UID: "3098c4ac-7ae9-4af9-a23f-969054a718fe"). InnerVolumeSpecName "kube-api-access-th22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.672776 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory" (OuterVolumeSpecName: "inventory") pod "3098c4ac-7ae9-4af9-a23f-969054a718fe" (UID: "3098c4ac-7ae9-4af9-a23f-969054a718fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.673974 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3098c4ac-7ae9-4af9-a23f-969054a718fe" (UID: "3098c4ac-7ae9-4af9-a23f-969054a718fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.737200 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.737255 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th22n\" (UniqueName: \"kubernetes.io/projected/3098c4ac-7ae9-4af9-a23f-969054a718fe-kube-api-access-th22n\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:18 crc kubenswrapper[4619]: I0126 11:29:18.737270 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3098c4ac-7ae9-4af9-a23f-969054a718fe-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.033972 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" event={"ID":"3098c4ac-7ae9-4af9-a23f-969054a718fe","Type":"ContainerDied","Data":"097a3345715ed0692bec011e634d316156da08dbce84c8619d87f3f7e2942fbb"} Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.034030 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.034042 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097a3345715ed0692bec011e634d316156da08dbce84c8619d87f3f7e2942fbb" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.128522 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4"] Jan 26 11:29:19 crc kubenswrapper[4619]: E0126 11:29:19.128922 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3098c4ac-7ae9-4af9-a23f-969054a718fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.128938 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3098c4ac-7ae9-4af9-a23f-969054a718fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.129121 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3098c4ac-7ae9-4af9-a23f-969054a718fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.129723 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143446 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143517 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143537 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143537 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143633 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.143830 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.144043 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.144081 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.150837 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4"] Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245233 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245286 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245348 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245380 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245460 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245517 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245542 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245719 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245852 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245887 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.245940 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: 
\"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.246035 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jvk\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.246095 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.246123 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.348981 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jvk\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349046 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349082 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349148 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 
11:29:19.349182 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349220 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349258 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349291 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349378 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349407 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349494 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349589 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349701 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.349754 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.354537 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.354984 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.356007 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.364885 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.367967 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.369120 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jvk\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.372423 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.372511 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.373986 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.374668 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.374734 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.375957 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.376317 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.377217 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:19 crc kubenswrapper[4619]: I0126 11:29:19.447788 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:29:20 crc kubenswrapper[4619]: I0126 11:29:20.070706 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4"] Jan 26 11:29:20 crc kubenswrapper[4619]: W0126 11:29:20.072514 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09e6155b_11d0_4cab_83c5_c215bac7c5d8.slice/crio-d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6 WatchSource:0}: Error finding container d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6: Status 404 returned error can't find the container with id d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6 Jan 26 11:29:21 crc kubenswrapper[4619]: I0126 11:29:21.050219 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" event={"ID":"09e6155b-11d0-4cab-83c5-c215bac7c5d8","Type":"ContainerStarted","Data":"d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6"} Jan 26 11:29:22 crc kubenswrapper[4619]: I0126 11:29:22.062483 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" event={"ID":"09e6155b-11d0-4cab-83c5-c215bac7c5d8","Type":"ContainerStarted","Data":"2aeb625a1a876e15021d7d3d4f32608db3a45759f253e5d243256d21aef26eea"} Jan 26 11:29:22 crc kubenswrapper[4619]: I0126 11:29:22.094279 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" podStartSLOduration=2.460373568 podStartE2EDuration="3.094258024s" podCreationTimestamp="2026-01-26 11:29:19 +0000 UTC" firstStartedPulling="2026-01-26 11:29:20.084040476 +0000 UTC m=+2059.118081192" lastFinishedPulling="2026-01-26 11:29:20.717924932 +0000 UTC m=+2059.751965648" observedRunningTime="2026-01-26 11:29:22.086182984 +0000 UTC m=+2061.120223700" watchObservedRunningTime="2026-01-26 11:29:22.094258024 +0000 UTC m=+2061.128298740" Jan 26 11:29:22 crc kubenswrapper[4619]: E0126 11:29:22.687173 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:29:32 crc kubenswrapper[4619]: E0126 11:29:32.925083 4619 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:29:43 crc kubenswrapper[4619]: E0126 11:29:43.165738 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:29:53 crc kubenswrapper[4619]: E0126 11:29:53.403959 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1c976_907c_4749_a9d2_e4a518578cbc.slice/crio-conmon-12eaa5c7077bc1ee43fb170af3d5f0b12cb97fffbe4778217c173d865146e6d8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.145833 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4"] Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.148873 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.192485 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.193362 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.228088 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58xn\" (UniqueName: \"kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.228172 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.228256 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.234356 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4"] Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.330153 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.330240 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.330369 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58xn\" (UniqueName: \"kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.331304 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.335933 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.347023 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58xn\" (UniqueName: \"kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn\") pod \"collect-profiles-29490450-ccht4\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:00 crc kubenswrapper[4619]: I0126 11:30:00.531194 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:01 crc kubenswrapper[4619]: I0126 11:30:01.017938 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4"] Jan 26 11:30:01 crc kubenswrapper[4619]: I0126 11:30:01.409093 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" event={"ID":"bfdb417c-be1d-4f9c-b28b-73aa77edf776","Type":"ContainerStarted","Data":"6ffb164bb30af599a620520ce923b06c6de6cee2e4216b6862b06ad6c4e214e4"} Jan 26 11:30:01 crc kubenswrapper[4619]: I0126 11:30:01.409465 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" event={"ID":"bfdb417c-be1d-4f9c-b28b-73aa77edf776","Type":"ContainerStarted","Data":"1af642d9ef7a3b5bf4e581bd61cb15f800bc0200e59a9d03d75a239b4a6e36dc"} Jan 26 11:30:01 crc kubenswrapper[4619]: I0126 11:30:01.434064 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" podStartSLOduration=1.434041116 podStartE2EDuration="1.434041116s" podCreationTimestamp="2026-01-26 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:30:01.424562978 +0000 UTC m=+2100.458603704" watchObservedRunningTime="2026-01-26 11:30:01.434041116 +0000 UTC m=+2100.468081832" Jan 26 11:30:02 crc kubenswrapper[4619]: I0126 11:30:02.425791 4619 generic.go:334] "Generic (PLEG): container finished" podID="09e6155b-11d0-4cab-83c5-c215bac7c5d8" containerID="2aeb625a1a876e15021d7d3d4f32608db3a45759f253e5d243256d21aef26eea" exitCode=0 Jan 26 11:30:02 crc kubenswrapper[4619]: I0126 11:30:02.425852 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" event={"ID":"09e6155b-11d0-4cab-83c5-c215bac7c5d8","Type":"ContainerDied","Data":"2aeb625a1a876e15021d7d3d4f32608db3a45759f253e5d243256d21aef26eea"} Jan 26 11:30:02 crc kubenswrapper[4619]: I0126 11:30:02.431016 4619 generic.go:334] "Generic (PLEG): container finished" podID="bfdb417c-be1d-4f9c-b28b-73aa77edf776" containerID="6ffb164bb30af599a620520ce923b06c6de6cee2e4216b6862b06ad6c4e214e4" exitCode=0 Jan 26 11:30:02 crc kubenswrapper[4619]: I0126 11:30:02.431065 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" event={"ID":"bfdb417c-be1d-4f9c-b28b-73aa77edf776","Type":"ContainerDied","Data":"6ffb164bb30af599a620520ce923b06c6de6cee2e4216b6862b06ad6c4e214e4"} Jan 26 11:30:03 crc kubenswrapper[4619]: I0126 11:30:03.910573 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:03 crc kubenswrapper[4619]: I0126 11:30:03.924095 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.111539 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112558 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112827 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112869 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112890 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume\") pod \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112909 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112926 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.112957 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113263 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2jvk\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") " Jan 26 11:30:04 crc 
kubenswrapper[4619]: I0126 11:30:04.113294 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113401 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113479 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113511 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113532 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113569 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r58xn\" (UniqueName: \"kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn\") pod \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113596 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\" (UID: \"09e6155b-11d0-4cab-83c5-c215bac7c5d8\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113669 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume" (OuterVolumeSpecName: "config-volume") pod "bfdb417c-be1d-4f9c-b28b-73aa77edf776" (UID: "bfdb417c-be1d-4f9c-b28b-73aa77edf776"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.113722 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume\") pod \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\" (UID: \"bfdb417c-be1d-4f9c-b28b-73aa77edf776\") "
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.114051 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdb417c-be1d-4f9c-b28b-73aa77edf776-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.118295 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.118422 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.119155 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.120402 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.121098 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bfdb417c-be1d-4f9c-b28b-73aa77edf776" (UID: "bfdb417c-be1d-4f9c-b28b-73aa77edf776"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.122248 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.123437 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn" (OuterVolumeSpecName: "kube-api-access-r58xn") pod "bfdb417c-be1d-4f9c-b28b-73aa77edf776" (UID: "bfdb417c-be1d-4f9c-b28b-73aa77edf776"). InnerVolumeSpecName "kube-api-access-r58xn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.124718 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk" (OuterVolumeSpecName: "kube-api-access-h2jvk") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "kube-api-access-h2jvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.124739 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.125837 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.126923 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.129725 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.134518 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.134600 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.154213 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory" (OuterVolumeSpecName: "inventory") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.156063 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09e6155b-11d0-4cab-83c5-c215bac7c5d8" (UID: "09e6155b-11d0-4cab-83c5-c215bac7c5d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.215893 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.215952 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.215972 4619 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.215986 4619 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.215999 4619 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216012 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2jvk\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-kube-api-access-h2jvk\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216055 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216067 4619 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216080 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216093 4619 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216106 4619 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216118 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r58xn\" (UniqueName: \"kubernetes.io/projected/bfdb417c-be1d-4f9c-b28b-73aa77edf776-kube-api-access-r58xn\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216131 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/09e6155b-11d0-4cab-83c5-c215bac7c5d8-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216143 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfdb417c-be1d-4f9c-b28b-73aa77edf776-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216160 4619 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.216171 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09e6155b-11d0-4cab-83c5-c215bac7c5d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.409189 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l"]
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.415566 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490405-2tm2l"]
Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.448259 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" event={"ID":"bfdb417c-be1d-4f9c-b28b-73aa77edf776","Type":"ContainerDied","Data":"1af642d9ef7a3b5bf4e581bd61cb15f800bc0200e59a9d03d75a239b4a6e36dc"}
containerID="1af642d9ef7a3b5bf4e581bd61cb15f800bc0200e59a9d03d75a239b4a6e36dc" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.448322 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-ccht4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.455253 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" event={"ID":"09e6155b-11d0-4cab-83c5-c215bac7c5d8","Type":"ContainerDied","Data":"d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6"} Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.455321 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42ec0384134ebc8347ded974f7ee1cc3b55347ce114bcfaf1eecd94e28a08c6" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.455374 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.610282 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4"] Jan 26 11:30:04 crc kubenswrapper[4619]: E0126 11:30:04.610781 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdb417c-be1d-4f9c-b28b-73aa77edf776" containerName="collect-profiles" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.610799 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb417c-be1d-4f9c-b28b-73aa77edf776" containerName="collect-profiles" Jan 26 11:30:04 crc kubenswrapper[4619]: E0126 11:30:04.610821 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e6155b-11d0-4cab-83c5-c215bac7c5d8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.610831 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e6155b-11d0-4cab-83c5-c215bac7c5d8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.611035 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e6155b-11d0-4cab-83c5-c215bac7c5d8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.611052 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdb417c-be1d-4f9c-b28b-73aa77edf776" containerName="collect-profiles" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.612113 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.614850 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.614921 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.615326 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.615429 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.615720 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.620862 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.621162 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzjtb\" (UniqueName: \"kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.621204 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.621254 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.621365 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.630687 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4"] Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.722510 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.722581 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzjtb\" (UniqueName: \"kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.722606 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.722651 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.722712 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.723456 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.729701 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.730063 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.730846 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.745654 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzjtb\" (UniqueName: \"kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4dhp4\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:04 crc kubenswrapper[4619]: I0126 11:30:04.929270 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:30:05 crc kubenswrapper[4619]: I0126 11:30:05.239897 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4"] Jan 26 11:30:05 crc kubenswrapper[4619]: W0126 11:30:05.244240 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc81dbb_9f20_43c0_a7a1_cdb5c13fee99.slice/crio-15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5 WatchSource:0}: Error finding container 15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5: Status 404 returned error can't find the container with id 15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5 Jan 26 11:30:05 crc kubenswrapper[4619]: I0126 11:30:05.275551 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332f48dc-fb71-4e6e-bb33-c416af7b743b" path="/var/lib/kubelet/pods/332f48dc-fb71-4e6e-bb33-c416af7b743b/volumes" Jan 26 11:30:05 crc kubenswrapper[4619]: I0126 11:30:05.465716 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" event={"ID":"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99","Type":"ContainerStarted","Data":"15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5"} Jan 26 11:30:06 crc kubenswrapper[4619]: I0126 11:30:06.474892 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" event={"ID":"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99","Type":"ContainerStarted","Data":"0200390e764eb1450bbb94d544e14a88e3bb1034543cf436da4ab56eee8020fc"} Jan 26 11:30:06 crc kubenswrapper[4619]: I0126 11:30:06.489834 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" podStartSLOduration=1.7594467040000001 podStartE2EDuration="2.489815122s" podCreationTimestamp="2026-01-26 11:30:04 +0000 UTC" firstStartedPulling="2026-01-26 11:30:05.246535947 +0000 UTC m=+2104.280576663" lastFinishedPulling="2026-01-26 11:30:05.976904365 +0000 UTC m=+2105.010945081" observedRunningTime="2026-01-26 11:30:06.487050637 +0000 UTC m=+2105.521091353" watchObservedRunningTime="2026-01-26 11:30:06.489815122 +0000 UTC m=+2105.523855848" Jan 26 11:30:32 crc kubenswrapper[4619]: I0126 11:30:32.665479 4619 scope.go:117] "RemoveContainer" containerID="51c15111aa612990cbfbdb938357cb97074a83a129a8f8f99d2a6242f45f94c2" Jan 26 11:30:57 crc kubenswrapper[4619]: I0126 11:30:57.905874 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-frx7k"] Jan 26 11:30:57 crc kubenswrapper[4619]: I0126 
Jan 26 11:30:57 crc kubenswrapper[4619]: I0126 11:30:57.914219 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:57 crc kubenswrapper[4619]: I0126 11:30:57.917068 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frx7k"]
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.004940 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.005018 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvlz\" (UniqueName: \"kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.005041 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.106762 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.107225 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvlz\" (UniqueName: \"kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.107373 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.107505 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.107702 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.127511 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvlz\" (UniqueName: \"kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz\") pod \"community-operators-frx7k\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.254047 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frx7k"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.524517 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"]
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.527355 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.536089 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"]
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.618641 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.618784 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.618895 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.723686 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.723777 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.723825 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.724313 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.724538 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.767229 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277\") pod \"redhat-operators-9wb7s\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.877169 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wb7s"
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.893136 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frx7k"]
Jan 26 11:30:58 crc kubenswrapper[4619]: I0126 11:30:58.953717 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerStarted","Data":"b31d49b2806f1cd59e0031e914037ba166f9d22f2d362eb8b4ae0fe32c7b1836"}
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.343829 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"]
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.965751 4619 generic.go:334] "Generic (PLEG): container finished" podID="10989d81-82a2-4022-83e2-defddced5fe2" containerID="452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a" exitCode=0
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.965795 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerDied","Data":"452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a"}
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.966150 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerStarted","Data":"0ec2270a0a3848f4de2cc7c7588f8200ee499d095996cdeb623c9c7b28fa48b7"}
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.969906 4619 generic.go:334] "Generic (PLEG): container finished" podID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerID="f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f" exitCode=0
Jan 26 11:30:59 crc kubenswrapper[4619]: I0126 11:30:59.969960 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerDied","Data":"f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f"}
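[editor's note] Each of these marketplace catalog pods carries the same small volume set: two emptyDirs ("utilities" and "catalog-content") that the short-lived extract containers, the ones finishing with exitCode=0 above, write into before the long-running registry-server starts, plus the projected service-account token volume the kubelet names kube-api-access-<suffix>. An illustrative sketch of that volume set using the Kubernetes API types (assumes k8s.io/api is on the module path; not taken from the pod's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		// Scratch space shared between the extract step and registry-server.
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// Projected service-account token, as in the kube-api-access-cnvlz entries.
		{Name: "kube-api-access-cnvlz", VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{
				{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
			},
		}}},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}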
pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerStarted","Data":"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49"} Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.097209 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.109638 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.112545 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.173359 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.173421 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.173623 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvr2\" (UniqueName: \"kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.275202 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.275452 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.275521 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvr2\" (UniqueName: \"kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.276526 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 
crc kubenswrapper[4619]: I0126 11:31:01.276750 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.303664 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvr2\" (UniqueName: \"kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2\") pod \"redhat-marketplace-88ft6\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.430734 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:01 crc kubenswrapper[4619]: I0126 11:31:01.993367 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerStarted","Data":"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0"} Jan 26 11:31:02 crc kubenswrapper[4619]: I0126 11:31:02.411414 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:03 crc kubenswrapper[4619]: I0126 11:31:03.001775 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerStarted","Data":"dbc627d5a8e2ad563223b41d559cf03662bc7c5ecaa7949ee351bca6ca4b87ec"} Jan 26 11:31:03 crc kubenswrapper[4619]: I0126 11:31:03.002136 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerStarted","Data":"f4687637e23856d915bacb2bc59a0e946e3acd2801f1950ffb20afd5fdb34219"} Jan 26 11:31:04 crc kubenswrapper[4619]: I0126 11:31:04.013162 4619 generic.go:334] "Generic (PLEG): container finished" podID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerID="938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0" exitCode=0 Jan 26 11:31:04 crc kubenswrapper[4619]: I0126 11:31:04.013211 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerDied","Data":"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0"} Jan 26 11:31:04 crc kubenswrapper[4619]: I0126 11:31:04.016391 4619 generic.go:334] "Generic (PLEG): container finished" podID="a2653625-569d-4e61-8cd4-9107edbf7952" containerID="dbc627d5a8e2ad563223b41d559cf03662bc7c5ecaa7949ee351bca6ca4b87ec" exitCode=0 Jan 26 11:31:04 crc kubenswrapper[4619]: I0126 11:31:04.016432 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerDied","Data":"dbc627d5a8e2ad563223b41d559cf03662bc7c5ecaa7949ee351bca6ca4b87ec"} Jan 26 11:31:05 crc kubenswrapper[4619]: I0126 11:31:05.028438 4619 generic.go:334] "Generic (PLEG): container finished" podID="10989d81-82a2-4022-83e2-defddced5fe2" containerID="a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49" exitCode=0 Jan 26 11:31:05 crc kubenswrapper[4619]: 
I0126 11:31:05.028489 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerDied","Data":"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49"} Jan 26 11:31:06 crc kubenswrapper[4619]: I0126 11:31:06.045099 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerStarted","Data":"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07"} Jan 26 11:31:06 crc kubenswrapper[4619]: I0126 11:31:06.047557 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerStarted","Data":"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8"} Jan 26 11:31:06 crc kubenswrapper[4619]: I0126 11:31:06.049925 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerStarted","Data":"e01198fe435dc6335a190dad63938d464d30fc22dad5089d3c1fc54b895b90a5"} Jan 26 11:31:06 crc kubenswrapper[4619]: I0126 11:31:06.099486 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wb7s" podStartSLOduration=2.412205375 podStartE2EDuration="8.099464729s" podCreationTimestamp="2026-01-26 11:30:58 +0000 UTC" firstStartedPulling="2026-01-26 11:30:59.9674533 +0000 UTC m=+2159.001494016" lastFinishedPulling="2026-01-26 11:31:05.654712654 +0000 UTC m=+2164.688753370" observedRunningTime="2026-01-26 11:31:06.092706093 +0000 UTC m=+2165.126746829" watchObservedRunningTime="2026-01-26 11:31:06.099464729 +0000 UTC m=+2165.133505455" Jan 26 11:31:06 crc kubenswrapper[4619]: I0126 11:31:06.141557 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-frx7k" podStartSLOduration=3.63840729 podStartE2EDuration="9.141537272s" podCreationTimestamp="2026-01-26 11:30:57 +0000 UTC" firstStartedPulling="2026-01-26 11:30:59.971799912 +0000 UTC m=+2159.005840628" lastFinishedPulling="2026-01-26 11:31:05.474929894 +0000 UTC m=+2164.508970610" observedRunningTime="2026-01-26 11:31:06.139010966 +0000 UTC m=+2165.173051702" watchObservedRunningTime="2026-01-26 11:31:06.141537272 +0000 UTC m=+2165.175577998" Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.070195 4619 generic.go:334] "Generic (PLEG): container finished" podID="a2653625-569d-4e61-8cd4-9107edbf7952" containerID="e01198fe435dc6335a190dad63938d464d30fc22dad5089d3c1fc54b895b90a5" exitCode=0 Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.070256 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerDied","Data":"e01198fe435dc6335a190dad63938d464d30fc22dad5089d3c1fc54b895b90a5"} Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.254920 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.255253 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.877408 4619 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:08 crc kubenswrapper[4619]: I0126 11:31:08.877818 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:09 crc kubenswrapper[4619]: I0126 11:31:09.086436 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerStarted","Data":"3092ff9d44f500aedfa10d7ad959eb107b5cb9afba411f6961e99889c90fb6dd"} Jan 26 11:31:09 crc kubenswrapper[4619]: I0126 11:31:09.105964 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-88ft6" podStartSLOduration=3.720368861 podStartE2EDuration="8.105946817s" podCreationTimestamp="2026-01-26 11:31:01 +0000 UTC" firstStartedPulling="2026-01-26 11:31:04.160735961 +0000 UTC m=+2163.194776677" lastFinishedPulling="2026-01-26 11:31:08.546313927 +0000 UTC m=+2167.580354633" observedRunningTime="2026-01-26 11:31:09.10300861 +0000 UTC m=+2168.137049326" watchObservedRunningTime="2026-01-26 11:31:09.105946817 +0000 UTC m=+2168.139987533" Jan 26 11:31:09 crc kubenswrapper[4619]: I0126 11:31:09.299986 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-frx7k" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="registry-server" probeResult="failure" output=< Jan 26 11:31:09 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:31:09 crc kubenswrapper[4619]: > Jan 26 11:31:10 crc kubenswrapper[4619]: I0126 11:31:10.008539 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wb7s" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="registry-server" probeResult="failure" output=< Jan 26 11:31:10 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:31:10 crc kubenswrapper[4619]: > Jan 26 11:31:11 crc kubenswrapper[4619]: I0126 11:31:11.431262 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:11 crc kubenswrapper[4619]: I0126 11:31:11.431647 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:11 crc kubenswrapper[4619]: I0126 11:31:11.480670 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:16 crc kubenswrapper[4619]: I0126 11:31:16.143531 4619 generic.go:334] "Generic (PLEG): container finished" podID="1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" containerID="0200390e764eb1450bbb94d544e14a88e3bb1034543cf436da4ab56eee8020fc" exitCode=0 Jan 26 11:31:16 crc kubenswrapper[4619]: I0126 11:31:16.143588 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" event={"ID":"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99","Type":"ContainerDied","Data":"0200390e764eb1450bbb94d544e14a88e3bb1034543cf436da4ab56eee8020fc"} Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.664167 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.835877 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam\") pod \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.835940 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzjtb\" (UniqueName: \"kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb\") pod \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.835962 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory\") pod \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.836025 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0\") pod \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.836071 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle\") pod \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\" (UID: \"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99\") " Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.842539 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" (UID: "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.853916 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb" (OuterVolumeSpecName: "kube-api-access-zzjtb") pod "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" (UID: "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99"). InnerVolumeSpecName "kube-api-access-zzjtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.865973 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" (UID: "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.873864 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory" (OuterVolumeSpecName: "inventory") pod "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" (UID: "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.882513 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" (UID: "1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.938408 4619 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.938438 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.938449 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzjtb\" (UniqueName: \"kubernetes.io/projected/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-kube-api-access-zzjtb\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.938459 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:17 crc kubenswrapper[4619]: I0126 11:31:17.938468 4619 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.162271 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" event={"ID":"1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99","Type":"ContainerDied","Data":"15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5"} Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.162311 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4dhp4" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.162318 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d77ab86dd2f349bfcf3fe89b05034892d7e616732899f41b0e0a01ee1485c5" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.307550 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.308143 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6"] Jan 26 11:31:18 crc kubenswrapper[4619]: E0126 11:31:18.308482 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.308499 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.308863 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.309497 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.311217 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.311383 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.311518 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.311734 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.311882 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.314987 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.322717 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6"] Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.379552 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.449815 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.450098 
4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.450154 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.450343 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkvdv\" (UniqueName: \"kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.450390 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.450456 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.549256 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frx7k"] Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552141 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkvdv\" (UniqueName: \"kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552190 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552229 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552288 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552373 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.552400 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.557049 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.557144 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.557413 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.557688 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.558336 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.572133 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkvdv\" (UniqueName: \"kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.627783 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.940988 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:18 crc kubenswrapper[4619]: I0126 11:31:18.991956 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:19 crc kubenswrapper[4619]: I0126 11:31:19.227673 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6"] Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.181664 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-frx7k" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="registry-server" containerID="cri-o://04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8" gracePeriod=2 Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.182089 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" event={"ID":"5f530175-ddda-4a1c-a437-af3747bb0da9","Type":"ContainerStarted","Data":"81f0300a68a4dd4d2ec624c9b27c24c48454bf41afae9efa3ddc6d201d749022"} Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.182119 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" event={"ID":"5f530175-ddda-4a1c-a437-af3747bb0da9","Type":"ContainerStarted","Data":"c72fe162160a55b2bbfa0063ec5bdbe1a7a057a92a3c06344387293ee0946619"} Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.200232 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" podStartSLOduration=1.661757545 podStartE2EDuration="2.200211574s" podCreationTimestamp="2026-01-26 11:31:18 +0000 UTC" firstStartedPulling="2026-01-26 11:31:19.240771917 +0000 UTC m=+2178.274812653" lastFinishedPulling="2026-01-26 11:31:19.779225966 +0000 UTC m=+2178.813266682" observedRunningTime="2026-01-26 
11:31:20.199490285 +0000 UTC m=+2179.233531021" watchObservedRunningTime="2026-01-26 11:31:20.200211574 +0000 UTC m=+2179.234252290" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.659089 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.797903 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content\") pod \"410f6ed9-1294-4895-ad81-1b06f889eab4\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.797993 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvlz\" (UniqueName: \"kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz\") pod \"410f6ed9-1294-4895-ad81-1b06f889eab4\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.798049 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities\") pod \"410f6ed9-1294-4895-ad81-1b06f889eab4\" (UID: \"410f6ed9-1294-4895-ad81-1b06f889eab4\") " Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.798968 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities" (OuterVolumeSpecName: "utilities") pod "410f6ed9-1294-4895-ad81-1b06f889eab4" (UID: "410f6ed9-1294-4895-ad81-1b06f889eab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.803493 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz" (OuterVolumeSpecName: "kube-api-access-cnvlz") pod "410f6ed9-1294-4895-ad81-1b06f889eab4" (UID: "410f6ed9-1294-4895-ad81-1b06f889eab4"). InnerVolumeSpecName "kube-api-access-cnvlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.870392 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "410f6ed9-1294-4895-ad81-1b06f889eab4" (UID: "410f6ed9-1294-4895-ad81-1b06f889eab4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.901441 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.901487 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvlz\" (UniqueName: \"kubernetes.io/projected/410f6ed9-1294-4895-ad81-1b06f889eab4-kube-api-access-cnvlz\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:20 crc kubenswrapper[4619]: I0126 11:31:20.901497 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f6ed9-1294-4895-ad81-1b06f889eab4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.152043 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"] Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.152597 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wb7s" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="registry-server" containerID="cri-o://a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07" gracePeriod=2 Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.190689 4619 generic.go:334] "Generic (PLEG): container finished" podID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerID="04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8" exitCode=0 Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.191015 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerDied","Data":"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8"} Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.191042 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-frx7k" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.191058 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frx7k" event={"ID":"410f6ed9-1294-4895-ad81-1b06f889eab4","Type":"ContainerDied","Data":"b31d49b2806f1cd59e0031e914037ba166f9d22f2d362eb8b4ae0fe32c7b1836"} Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.191092 4619 scope.go:117] "RemoveContainer" containerID="04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.215936 4619 scope.go:117] "RemoveContainer" containerID="938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.247365 4619 scope.go:117] "RemoveContainer" containerID="f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.249248 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frx7k"] Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.289311 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-frx7k"] Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.365510 4619 scope.go:117] "RemoveContainer" containerID="04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8" Jan 26 11:31:21 crc kubenswrapper[4619]: E0126 11:31:21.366480 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8\": container with ID starting with 04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8 not found: ID does not exist" containerID="04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.366606 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8"} err="failed to get container status \"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8\": rpc error: code = NotFound desc = could not find container \"04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8\": container with ID starting with 04c71da520509c5d4a84230bbebe1069181e32638cea362d2712cc0cdd6573b8 not found: ID does not exist" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.366701 4619 scope.go:117] "RemoveContainer" containerID="938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0" Jan 26 11:31:21 crc kubenswrapper[4619]: E0126 11:31:21.367688 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0\": container with ID starting with 938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0 not found: ID does not exist" containerID="938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.367781 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0"} err="failed to get container status \"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0\": rpc error: code = NotFound desc = could not find 
container \"938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0\": container with ID starting with 938ea88515615ec0672f2439b5cc47f2adc83d415d9a1717dce11b3be96d45f0 not found: ID does not exist" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.367869 4619 scope.go:117] "RemoveContainer" containerID="f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f" Jan 26 11:31:21 crc kubenswrapper[4619]: E0126 11:31:21.368533 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f\": container with ID starting with f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f not found: ID does not exist" containerID="f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.368643 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f"} err="failed to get container status \"f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f\": rpc error: code = NotFound desc = could not find container \"f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f\": container with ID starting with f53d8821705f6796eca13795d4b7d51f286bcfebc11ff43b1adf88892cf8c22f not found: ID does not exist" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.478313 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.626987 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.827771 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content\") pod \"10989d81-82a2-4022-83e2-defddced5fe2\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.827954 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities\") pod \"10989d81-82a2-4022-83e2-defddced5fe2\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.828147 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277\") pod \"10989d81-82a2-4022-83e2-defddced5fe2\" (UID: \"10989d81-82a2-4022-83e2-defddced5fe2\") " Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.828631 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities" (OuterVolumeSpecName: "utilities") pod "10989d81-82a2-4022-83e2-defddced5fe2" (UID: "10989d81-82a2-4022-83e2-defddced5fe2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.833156 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277" (OuterVolumeSpecName: "kube-api-access-fs277") pod "10989d81-82a2-4022-83e2-defddced5fe2" (UID: "10989d81-82a2-4022-83e2-defddced5fe2"). InnerVolumeSpecName "kube-api-access-fs277". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.930115 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.930148 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/10989d81-82a2-4022-83e2-defddced5fe2-kube-api-access-fs277\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:21 crc kubenswrapper[4619]: I0126 11:31:21.953048 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10989d81-82a2-4022-83e2-defddced5fe2" (UID: "10989d81-82a2-4022-83e2-defddced5fe2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.032183 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10989d81-82a2-4022-83e2-defddced5fe2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.200420 4619 generic.go:334] "Generic (PLEG): container finished" podID="10989d81-82a2-4022-83e2-defddced5fe2" containerID="a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07" exitCode=0 Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.200520 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wb7s" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.200514 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerDied","Data":"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07"} Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.200662 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wb7s" event={"ID":"10989d81-82a2-4022-83e2-defddced5fe2","Type":"ContainerDied","Data":"0ec2270a0a3848f4de2cc7c7588f8200ee499d095996cdeb623c9c7b28fa48b7"} Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.200690 4619 scope.go:117] "RemoveContainer" containerID="a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.219654 4619 scope.go:117] "RemoveContainer" containerID="a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.241237 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"] Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.252898 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wb7s"] Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.264906 4619 scope.go:117] "RemoveContainer" containerID="452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.293147 4619 scope.go:117] "RemoveContainer" containerID="a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07" Jan 26 11:31:22 crc kubenswrapper[4619]: E0126 11:31:22.293647 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07\": container with ID starting with a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07 not found: ID does not exist" containerID="a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.293676 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07"} err="failed to get container status \"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07\": rpc error: code = NotFound desc = could not find container \"a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07\": container with ID starting with a93d6f96c3fc84488cf6fe6091432f3c78d9235bb3a5d3ac6f172eea1e1e8e07 not found: ID does not exist" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.293697 4619 scope.go:117] "RemoveContainer" containerID="a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49" Jan 26 11:31:22 crc kubenswrapper[4619]: E0126 11:31:22.294051 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49\": container with ID starting with a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49 not found: ID does not exist" containerID="a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.294076 4619 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49"} err="failed to get container status \"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49\": rpc error: code = NotFound desc = could not find container \"a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49\": container with ID starting with a99a198e5d4442d8dc482983a4040b5b57931b43fd6c041e732dfd7e0cbf2c49 not found: ID does not exist" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.294091 4619 scope.go:117] "RemoveContainer" containerID="452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a" Jan 26 11:31:22 crc kubenswrapper[4619]: E0126 11:31:22.294470 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a\": container with ID starting with 452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a not found: ID does not exist" containerID="452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a" Jan 26 11:31:22 crc kubenswrapper[4619]: I0126 11:31:22.294498 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a"} err="failed to get container status \"452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a\": rpc error: code = NotFound desc = could not find container \"452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a\": container with ID starting with 452664cc984277e1ed3545ea673c54190cf641c0a4ddd8188cdafef852f3ff3a not found: ID does not exist" Jan 26 11:31:23 crc kubenswrapper[4619]: I0126 11:31:23.276921 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10989d81-82a2-4022-83e2-defddced5fe2" path="/var/lib/kubelet/pods/10989d81-82a2-4022-83e2-defddced5fe2/volumes" Jan 26 11:31:23 crc kubenswrapper[4619]: I0126 11:31:23.278503 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" path="/var/lib/kubelet/pods/410f6ed9-1294-4895-ad81-1b06f889eab4/volumes" Jan 26 11:31:24 crc kubenswrapper[4619]: I0126 11:31:24.949417 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:24 crc kubenswrapper[4619]: I0126 11:31:24.949994 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-88ft6" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="registry-server" containerID="cri-o://3092ff9d44f500aedfa10d7ad959eb107b5cb9afba411f6961e99889c90fb6dd" gracePeriod=2 Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.239078 4619 generic.go:334] "Generic (PLEG): container finished" podID="a2653625-569d-4e61-8cd4-9107edbf7952" containerID="3092ff9d44f500aedfa10d7ad959eb107b5cb9afba411f6961e99889c90fb6dd" exitCode=0 Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.239186 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerDied","Data":"3092ff9d44f500aedfa10d7ad959eb107b5cb9afba411f6961e99889c90fb6dd"} Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.485133 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.622809 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities\") pod \"a2653625-569d-4e61-8cd4-9107edbf7952\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.622886 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvr2\" (UniqueName: \"kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2\") pod \"a2653625-569d-4e61-8cd4-9107edbf7952\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.623518 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities" (OuterVolumeSpecName: "utilities") pod "a2653625-569d-4e61-8cd4-9107edbf7952" (UID: "a2653625-569d-4e61-8cd4-9107edbf7952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.623746 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content\") pod \"a2653625-569d-4e61-8cd4-9107edbf7952\" (UID: \"a2653625-569d-4e61-8cd4-9107edbf7952\") " Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.624602 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.645771 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2653625-569d-4e61-8cd4-9107edbf7952" (UID: "a2653625-569d-4e61-8cd4-9107edbf7952"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.646157 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2" (OuterVolumeSpecName: "kube-api-access-5mvr2") pod "a2653625-569d-4e61-8cd4-9107edbf7952" (UID: "a2653625-569d-4e61-8cd4-9107edbf7952"). InnerVolumeSpecName "kube-api-access-5mvr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.727682 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2653625-569d-4e61-8cd4-9107edbf7952-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:25 crc kubenswrapper[4619]: I0126 11:31:25.727711 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mvr2\" (UniqueName: \"kubernetes.io/projected/a2653625-569d-4e61-8cd4-9107edbf7952-kube-api-access-5mvr2\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.248515 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ft6" event={"ID":"a2653625-569d-4e61-8cd4-9107edbf7952","Type":"ContainerDied","Data":"f4687637e23856d915bacb2bc59a0e946e3acd2801f1950ffb20afd5fdb34219"} Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.248577 4619 scope.go:117] "RemoveContainer" containerID="3092ff9d44f500aedfa10d7ad959eb107b5cb9afba411f6961e99889c90fb6dd" Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.248587 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ft6" Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.269068 4619 scope.go:117] "RemoveContainer" containerID="e01198fe435dc6335a190dad63938d464d30fc22dad5089d3c1fc54b895b90a5" Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.290810 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.292932 4619 scope.go:117] "RemoveContainer" containerID="dbc627d5a8e2ad563223b41d559cf03662bc7c5ecaa7949ee351bca6ca4b87ec" Jan 26 11:31:26 crc kubenswrapper[4619]: I0126 11:31:26.296066 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ft6"] Jan 26 11:31:27 crc kubenswrapper[4619]: I0126 11:31:27.270637 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" path="/var/lib/kubelet/pods/a2653625-569d-4e61-8cd4-9107edbf7952/volumes" Jan 26 11:31:44 crc kubenswrapper[4619]: I0126 11:31:44.234166 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:31:44 crc kubenswrapper[4619]: I0126 11:31:44.234774 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.829664 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830487 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830499 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" 
containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830509 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830515 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830529 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830538 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830553 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830559 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830585 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830590 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="extract-utilities" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830602 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830608 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830633 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830640 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="extract-content" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830652 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830658 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: E0126 11:31:48.830670 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830676 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830830 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2653625-569d-4e61-8cd4-9107edbf7952" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830847 4619 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="410f6ed9-1294-4895-ad81-1b06f889eab4" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.830869 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="10989d81-82a2-4022-83e2-defddced5fe2" containerName="registry-server" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.833004 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.854751 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.934072 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmn5d\" (UniqueName: \"kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.934273 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:48 crc kubenswrapper[4619]: I0126 11:31:48.934417 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.036043 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmn5d\" (UniqueName: \"kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.036132 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.036208 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.036782 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.036789 4619 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.061809 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmn5d\" (UniqueName: \"kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d\") pod \"certified-operators-p6lfq\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.156173 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:49 crc kubenswrapper[4619]: I0126 11:31:49.708017 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:31:50 crc kubenswrapper[4619]: I0126 11:31:50.438362 4619 generic.go:334] "Generic (PLEG): container finished" podID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerID="05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32" exitCode=0 Jan 26 11:31:50 crc kubenswrapper[4619]: I0126 11:31:50.438630 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerDied","Data":"05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32"} Jan 26 11:31:50 crc kubenswrapper[4619]: I0126 11:31:50.438654 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerStarted","Data":"abeb7211ad74259ec7e4059ae6ba97bb4aa6f2953c599b2eb831b090cd064edc"} Jan 26 11:31:50 crc kubenswrapper[4619]: I0126 11:31:50.443903 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:31:51 crc kubenswrapper[4619]: I0126 11:31:51.448981 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerStarted","Data":"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1"} Jan 26 11:31:53 crc kubenswrapper[4619]: I0126 11:31:53.464935 4619 generic.go:334] "Generic (PLEG): container finished" podID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerID="b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1" exitCode=0 Jan 26 11:31:53 crc kubenswrapper[4619]: I0126 11:31:53.465033 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerDied","Data":"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1"} Jan 26 11:31:54 crc kubenswrapper[4619]: I0126 11:31:54.477059 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerStarted","Data":"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772"} Jan 26 11:31:54 crc kubenswrapper[4619]: I0126 11:31:54.508794 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-p6lfq" podStartSLOduration=3.121281639 podStartE2EDuration="6.508758244s" podCreationTimestamp="2026-01-26 11:31:48 +0000 UTC" firstStartedPulling="2026-01-26 11:31:50.443702545 +0000 UTC m=+2209.477743251" lastFinishedPulling="2026-01-26 11:31:53.83117914 +0000 UTC m=+2212.865219856" observedRunningTime="2026-01-26 11:31:54.505805977 +0000 UTC m=+2213.539846693" watchObservedRunningTime="2026-01-26 11:31:54.508758244 +0000 UTC m=+2213.542798970" Jan 26 11:31:59 crc kubenswrapper[4619]: I0126 11:31:59.156715 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:59 crc kubenswrapper[4619]: I0126 11:31:59.157197 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:59 crc kubenswrapper[4619]: I0126 11:31:59.205262 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:59 crc kubenswrapper[4619]: I0126 11:31:59.557121 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:31:59 crc kubenswrapper[4619]: I0126 11:31:59.600426 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:32:01 crc kubenswrapper[4619]: I0126 11:32:01.526350 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6lfq" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="registry-server" containerID="cri-o://22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772" gracePeriod=2 Jan 26 11:32:01 crc kubenswrapper[4619]: I0126 11:32:01.997881 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.138162 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmn5d\" (UniqueName: \"kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d\") pod \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.138318 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities\") pod \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.138370 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content\") pod \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\" (UID: \"6f6eecb7-5cf7-474c-8839-22c3cf031a88\") " Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.139223 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities" (OuterVolumeSpecName: "utilities") pod "6f6eecb7-5cf7-474c-8839-22c3cf031a88" (UID: "6f6eecb7-5cf7-474c-8839-22c3cf031a88"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.145285 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d" (OuterVolumeSpecName: "kube-api-access-fmn5d") pod "6f6eecb7-5cf7-474c-8839-22c3cf031a88" (UID: "6f6eecb7-5cf7-474c-8839-22c3cf031a88"). InnerVolumeSpecName "kube-api-access-fmn5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.184188 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f6eecb7-5cf7-474c-8839-22c3cf031a88" (UID: "6f6eecb7-5cf7-474c-8839-22c3cf031a88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.240957 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.240992 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6eecb7-5cf7-474c-8839-22c3cf031a88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.241006 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmn5d\" (UniqueName: \"kubernetes.io/projected/6f6eecb7-5cf7-474c-8839-22c3cf031a88-kube-api-access-fmn5d\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.536828 4619 generic.go:334] "Generic (PLEG): container finished" podID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerID="22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772" exitCode=0 Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.536894 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6lfq" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.536912 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerDied","Data":"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772"} Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.538346 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6lfq" event={"ID":"6f6eecb7-5cf7-474c-8839-22c3cf031a88","Type":"ContainerDied","Data":"abeb7211ad74259ec7e4059ae6ba97bb4aa6f2953c599b2eb831b090cd064edc"} Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.538384 4619 scope.go:117] "RemoveContainer" containerID="22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.564316 4619 scope.go:117] "RemoveContainer" containerID="b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.594431 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.605347 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6lfq"] Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.619435 4619 scope.go:117] "RemoveContainer" containerID="05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.646373 4619 scope.go:117] "RemoveContainer" containerID="22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772" Jan 26 11:32:02 crc kubenswrapper[4619]: E0126 11:32:02.648021 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772\": container with ID starting with 22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772 not found: ID does not exist" containerID="22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.648059 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772"} err="failed to get container status \"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772\": rpc error: code = NotFound desc = could not find container \"22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772\": container with ID starting with 22cb74f4fddbcfdb064a08ff1a0ece24f140af63bcc2575620a0b9d1ff64b772 not found: ID does not exist" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.648087 4619 scope.go:117] "RemoveContainer" containerID="b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1" Jan 26 11:32:02 crc kubenswrapper[4619]: E0126 11:32:02.648311 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1\": container with ID starting with b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1 not found: ID does not exist" containerID="b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.648339 4619 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1"} err="failed to get container status \"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1\": rpc error: code = NotFound desc = could not find container \"b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1\": container with ID starting with b50ad7bbed284daa0964de45ec41040f0830e2270a1b2ce6b371973c67eaf1a1 not found: ID does not exist" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.648357 4619 scope.go:117] "RemoveContainer" containerID="05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32" Jan 26 11:32:02 crc kubenswrapper[4619]: E0126 11:32:02.648574 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32\": container with ID starting with 05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32 not found: ID does not exist" containerID="05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32" Jan 26 11:32:02 crc kubenswrapper[4619]: I0126 11:32:02.648601 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32"} err="failed to get container status \"05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32\": rpc error: code = NotFound desc = could not find container \"05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32\": container with ID starting with 05d0d3ee68c1cd60cc0c17f2bc819fa031fdbafa0144e1306e3b25f960630f32 not found: ID does not exist" Jan 26 11:32:03 crc kubenswrapper[4619]: I0126 11:32:03.273472 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" path="/var/lib/kubelet/pods/6f6eecb7-5cf7-474c-8839-22c3cf031a88/volumes" Jan 26 11:32:13 crc kubenswrapper[4619]: I0126 11:32:13.630229 4619 generic.go:334] "Generic (PLEG): container finished" podID="5f530175-ddda-4a1c-a437-af3747bb0da9" containerID="81f0300a68a4dd4d2ec624c9b27c24c48454bf41afae9efa3ddc6d201d749022" exitCode=0 Jan 26 11:32:13 crc kubenswrapper[4619]: I0126 11:32:13.630317 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" event={"ID":"5f530175-ddda-4a1c-a437-af3747bb0da9","Type":"ContainerDied","Data":"81f0300a68a4dd4d2ec624c9b27c24c48454bf41afae9efa3ddc6d201d749022"} Jan 26 11:32:14 crc kubenswrapper[4619]: I0126 11:32:14.234406 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:32:14 crc kubenswrapper[4619]: I0126 11:32:14.234507 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.030867 4619 util.go:48] "No ready sandbox for pod can be found. 
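The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are benign: the kubelet asks CRI-O for the status of each container it wants to delete, and a NotFound answer means the runtime has already garbage-collected it, so the delete is treated as already done. A minimal Go sketch of that idempotent-delete pattern; the Runtime interface and fakeRuntime below are illustrative stand-ins, not the kubelet's actual CRI client:

    // Minimal sketch: treat a gRPC NotFound from the container runtime as
    // "already deleted". Runtime and fakeRuntime are illustrative stand-ins.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    type Runtime interface {
        ContainerStatus(id string) error
        RemoveContainer(id string) error
    }

    type fakeRuntime struct{}

    // Simulate CRI-O having already garbage-collected the container.
    func (fakeRuntime) ContainerStatus(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func (fakeRuntime) RemoveContainer(id string) error { return nil }

    // deleteContainer is idempotent: NotFound means the work is already done,
    // so the error is logged at most and then dropped.
    func deleteContainer(rt Runtime, id string) error {
        if err := rt.ContainerStatus(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already gone; treating delete as a no-op\n", id)
                return nil
            }
            return fmt.Errorf("failed to get container status %q: %w", id, err)
        }
        return rt.RemoveContainer(id)
    }

    func main() {
        _ = deleteContainer(fakeRuntime{}, "22cb74f4fddb")
    }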
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083059 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083142 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkvdv\" (UniqueName: \"kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083184 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083280 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083321 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.083360 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory\") pod \"5f530175-ddda-4a1c-a437-af3747bb0da9\" (UID: \"5f530175-ddda-4a1c-a437-af3747bb0da9\") " Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.088792 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv" (OuterVolumeSpecName: "kube-api-access-rkvdv") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "kube-api-access-rkvdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.107568 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.116899 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory" (OuterVolumeSpecName: "inventory") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.121158 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.125905 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.143192 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5f530175-ddda-4a1c-a437-af3747bb0da9" (UID: "5f530175-ddda-4a1c-a437-af3747bb0da9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186191 4619 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186583 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkvdv\" (UniqueName: \"kubernetes.io/projected/5f530175-ddda-4a1c-a437-af3747bb0da9-kube-api-access-rkvdv\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186697 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186765 4619 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186822 4619 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.186900 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f530175-ddda-4a1c-a437-af3747bb0da9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.649515 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" event={"ID":"5f530175-ddda-4a1c-a437-af3747bb0da9","Type":"ContainerDied","Data":"c72fe162160a55b2bbfa0063ec5bdbe1a7a057a92a3c06344387293ee0946619"} Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.649553 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c72fe162160a55b2bbfa0063ec5bdbe1a7a057a92a3c06344387293ee0946619" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.649654 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.775893 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld"] Jan 26 11:32:15 crc kubenswrapper[4619]: E0126 11:32:15.776729 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="extract-utilities" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.776754 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="extract-utilities" Jan 26 11:32:15 crc kubenswrapper[4619]: E0126 11:32:15.776768 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f530175-ddda-4a1c-a437-af3747bb0da9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.776778 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f530175-ddda-4a1c-a437-af3747bb0da9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 11:32:15 crc kubenswrapper[4619]: E0126 11:32:15.776800 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="extract-content" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.776807 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="extract-content" Jan 26 11:32:15 crc kubenswrapper[4619]: E0126 11:32:15.776823 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="registry-server" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.776831 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="registry-server" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.777044 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6eecb7-5cf7-474c-8839-22c3cf031a88" containerName="registry-server" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.777076 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f530175-ddda-4a1c-a437-af3747bb0da9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.777821 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.784204 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.784461 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.784606 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.784873 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.785024 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.793110 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld"] Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.798726 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.798903 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.799019 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsr6p\" (UniqueName: \"kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.799151 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.799275 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.901645 4619 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.901753 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.901796 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.901858 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsr6p\" (UniqueName: \"kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.901888 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.908277 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.908766 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.917857 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.918895 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:15 crc kubenswrapper[4619]: I0126 11:32:15.919456 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsr6p\" (UniqueName: \"kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h97ld\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:16 crc kubenswrapper[4619]: I0126 11:32:16.157441 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:32:16 crc kubenswrapper[4619]: I0126 11:32:16.684030 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld"] Jan 26 11:32:17 crc kubenswrapper[4619]: I0126 11:32:17.666587 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" event={"ID":"eaa2c414-823b-48a9-a59d-1f02d1708f9f","Type":"ContainerStarted","Data":"659744aa1a84c84febd46f9d70681f292b4a8be09307840acbc2c044e9db0d06"} Jan 26 11:32:17 crc kubenswrapper[4619]: I0126 11:32:17.666934 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" event={"ID":"eaa2c414-823b-48a9-a59d-1f02d1708f9f","Type":"ContainerStarted","Data":"68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5"} Jan 26 11:32:17 crc kubenswrapper[4619]: I0126 11:32:17.693919 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" podStartSLOduration=2.214258248 podStartE2EDuration="2.69389589s" podCreationTimestamp="2026-01-26 11:32:15 +0000 UTC" firstStartedPulling="2026-01-26 11:32:16.695393859 +0000 UTC m=+2235.729434575" lastFinishedPulling="2026-01-26 11:32:17.175031511 +0000 UTC m=+2236.209072217" observedRunningTime="2026-01-26 11:32:17.689104786 +0000 UTC m=+2236.723145502" watchObservedRunningTime="2026-01-26 11:32:17.69389589 +0000 UTC m=+2236.727936606" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.234244 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.236241 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.236393 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.237242 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.237380 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" gracePeriod=600 Jan 26 11:32:44 crc kubenswrapper[4619]: E0126 11:32:44.431139 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.907913 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" exitCode=0 Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.907964 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b"} Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.908003 4619 scope.go:117] "RemoveContainer" containerID="48466c6ecf3b810bf1e304c5501f651d0eed5c6b8b657b951b311faf79acbb89" Jan 26 11:32:44 crc kubenswrapper[4619]: I0126 11:32:44.908711 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:32:44 crc kubenswrapper[4619]: E0126 11:32:44.909072 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:32:56 crc kubenswrapper[4619]: I0126 11:32:56.261779 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:32:56 crc kubenswrapper[4619]: E0126 11:32:56.263788 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:33:07 crc kubenswrapper[4619]: I0126 11:33:07.261466 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:33:07 crc kubenswrapper[4619]: E0126 11:33:07.262070 4619 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:33:18 crc kubenswrapper[4619]: I0126 11:33:18.260961 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:33:18 crc kubenswrapper[4619]: E0126 11:33:18.261648 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:33:29 crc kubenswrapper[4619]: I0126 11:33:29.261382 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:33:29 crc kubenswrapper[4619]: E0126 11:33:29.262266 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:33:40 crc kubenswrapper[4619]: I0126 11:33:40.261315 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:33:40 crc kubenswrapper[4619]: E0126 11:33:40.263070 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:33:55 crc kubenswrapper[4619]: I0126 11:33:55.262566 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:33:55 crc kubenswrapper[4619]: E0126 11:33:55.263366 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:34:07 crc kubenswrapper[4619]: I0126 11:34:07.261795 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:34:07 crc kubenswrapper[4619]: E0126 11:34:07.262505 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:34:19 crc kubenswrapper[4619]: I0126 11:34:19.261814 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:34:19 crc kubenswrapper[4619]: E0126 11:34:19.262798 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:34:30 crc kubenswrapper[4619]: I0126 11:34:30.261678 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:34:30 crc kubenswrapper[4619]: E0126 11:34:30.262250 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:34:41 crc kubenswrapper[4619]: I0126 11:34:41.267575 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:34:41 crc kubenswrapper[4619]: E0126 11:34:41.268305 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:34:56 crc kubenswrapper[4619]: I0126 11:34:56.260537 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:34:56 crc kubenswrapper[4619]: E0126 11:34:56.261966 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:35:09 crc kubenswrapper[4619]: I0126 11:35:09.261870 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:35:09 crc kubenswrapper[4619]: E0126 11:35:09.264355 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" 
podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:35:22 crc kubenswrapper[4619]: I0126 11:35:22.262319 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:35:22 crc kubenswrapper[4619]: E0126 11:35:22.263157 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:35:36 crc kubenswrapper[4619]: I0126 11:35:36.261144 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:35:36 crc kubenswrapper[4619]: E0126 11:35:36.262126 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:35:51 crc kubenswrapper[4619]: I0126 11:35:51.269447 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:35:51 crc kubenswrapper[4619]: E0126 11:35:51.270639 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:02 crc kubenswrapper[4619]: I0126 11:36:02.261901 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:36:02 crc kubenswrapper[4619]: E0126 11:36:02.262541 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:17 crc kubenswrapper[4619]: I0126 11:36:17.261951 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:36:17 crc kubenswrapper[4619]: E0126 11:36:17.264246 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:28 crc kubenswrapper[4619]: I0126 11:36:28.260873 4619 scope.go:117] "RemoveContainer" 
containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:36:28 crc kubenswrapper[4619]: E0126 11:36:28.261669 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:40 crc kubenswrapper[4619]: I0126 11:36:40.261892 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:36:40 crc kubenswrapper[4619]: E0126 11:36:40.262834 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:54 crc kubenswrapper[4619]: I0126 11:36:54.261659 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:36:54 crc kubenswrapper[4619]: E0126 11:36:54.262589 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:36:59 crc kubenswrapper[4619]: I0126 11:36:59.335574 4619 generic.go:334] "Generic (PLEG): container finished" podID="eaa2c414-823b-48a9-a59d-1f02d1708f9f" containerID="659744aa1a84c84febd46f9d70681f292b4a8be09307840acbc2c044e9db0d06" exitCode=0 Jan 26 11:36:59 crc kubenswrapper[4619]: I0126 11:36:59.335695 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" event={"ID":"eaa2c414-823b-48a9-a59d-1f02d1708f9f","Type":"ContainerDied","Data":"659744aa1a84c84febd46f9d70681f292b4a8be09307840acbc2c044e9db0d06"} Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.823053 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.995609 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle\") pod \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.995853 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory\") pod \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.995908 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam\") pod \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.996054 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0\") pod \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " Jan 26 11:37:00 crc kubenswrapper[4619]: I0126 11:37:00.996087 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsr6p\" (UniqueName: \"kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p\") pod \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\" (UID: \"eaa2c414-823b-48a9-a59d-1f02d1708f9f\") " Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.001950 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "eaa2c414-823b-48a9-a59d-1f02d1708f9f" (UID: "eaa2c414-823b-48a9-a59d-1f02d1708f9f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.004220 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p" (OuterVolumeSpecName: "kube-api-access-bsr6p") pod "eaa2c414-823b-48a9-a59d-1f02d1708f9f" (UID: "eaa2c414-823b-48a9-a59d-1f02d1708f9f"). InnerVolumeSpecName "kube-api-access-bsr6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.027297 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eaa2c414-823b-48a9-a59d-1f02d1708f9f" (UID: "eaa2c414-823b-48a9-a59d-1f02d1708f9f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.029468 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory" (OuterVolumeSpecName: "inventory") pod "eaa2c414-823b-48a9-a59d-1f02d1708f9f" (UID: "eaa2c414-823b-48a9-a59d-1f02d1708f9f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.031453 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "eaa2c414-823b-48a9-a59d-1f02d1708f9f" (UID: "eaa2c414-823b-48a9-a59d-1f02d1708f9f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.102991 4619 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.103031 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsr6p\" (UniqueName: \"kubernetes.io/projected/eaa2c414-823b-48a9-a59d-1f02d1708f9f-kube-api-access-bsr6p\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.103043 4619 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.103053 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.103061 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa2c414-823b-48a9-a59d-1f02d1708f9f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.354765 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" event={"ID":"eaa2c414-823b-48a9-a59d-1f02d1708f9f","Type":"ContainerDied","Data":"68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5"} Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.354817 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.354855 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h97ld" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.463247 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng"] Jan 26 11:37:01 crc kubenswrapper[4619]: E0126 11:37:01.463602 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa2c414-823b-48a9-a59d-1f02d1708f9f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.463636 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa2c414-823b-48a9-a59d-1f02d1708f9f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.463831 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa2c414-823b-48a9-a59d-1f02d1708f9f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.464497 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.467063 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.467151 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.467538 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.467736 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.468017 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.468338 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.469725 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.476262 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng"] Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521226 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521496 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521581 4619 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521746 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521795 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521885 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521935 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kxr\" (UniqueName: \"kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.521957 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.522016 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.623757 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.623844 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.623870 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.623894 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.623935 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.624093 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9kxr\" (UniqueName: \"kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.624884 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.624933 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.624972 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.625049 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.628415 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.628541 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.628866 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.630258 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.633910 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.641251 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.642439 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: 
I0126 11:37:01.646391 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9kxr\" (UniqueName: \"kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xjjng\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:01 crc kubenswrapper[4619]: I0126 11:37:01.782692 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:37:02 crc kubenswrapper[4619]: I0126 11:37:02.368316 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng"] Jan 26 11:37:02 crc kubenswrapper[4619]: I0126 11:37:02.378671 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:37:03 crc kubenswrapper[4619]: I0126 11:37:03.371504 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" event={"ID":"b641ed88-2b99-4794-a48d-906d2355417d","Type":"ContainerStarted","Data":"4cef260c79f8175e3cff84eb1a40e54dd1364b2977139a28de2bbae2e25add3f"} Jan 26 11:37:03 crc kubenswrapper[4619]: I0126 11:37:03.372229 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" event={"ID":"b641ed88-2b99-4794-a48d-906d2355417d","Type":"ContainerStarted","Data":"1feec97b55643378084f67444204b6e045069385bdf72d5bbc2346b4ac32a5f9"} Jan 26 11:37:03 crc kubenswrapper[4619]: I0126 11:37:03.391404 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" podStartSLOduration=1.957586138 podStartE2EDuration="2.391384449s" podCreationTimestamp="2026-01-26 11:37:01 +0000 UTC" firstStartedPulling="2026-01-26 11:37:02.378416915 +0000 UTC m=+2521.412457631" lastFinishedPulling="2026-01-26 11:37:02.812215226 +0000 UTC m=+2521.846255942" observedRunningTime="2026-01-26 11:37:03.384533087 +0000 UTC m=+2522.418573803" watchObservedRunningTime="2026-01-26 11:37:03.391384449 +0000 UTC m=+2522.425425165" Jan 26 11:37:03 crc kubenswrapper[4619]: E0126 11:37:03.398982 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache]" Jan 26 11:37:09 crc kubenswrapper[4619]: I0126 11:37:09.260867 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:37:09 crc kubenswrapper[4619]: E0126 11:37:09.261755 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:37:13 crc kubenswrapper[4619]: E0126 
11:37:13.617554 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache]" Jan 26 11:37:23 crc kubenswrapper[4619]: E0126 11:37:23.850062 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: unable to find data in memory cache]" Jan 26 11:37:24 crc kubenswrapper[4619]: I0126 11:37:24.262159 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:37:24 crc kubenswrapper[4619]: E0126 11:37:24.262845 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:37:34 crc kubenswrapper[4619]: E0126 11:37:34.067286 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: unable to find data in memory cache]" Jan 26 11:37:36 crc kubenswrapper[4619]: I0126 11:37:36.261811 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:37:36 crc kubenswrapper[4619]: E0126 11:37:36.262562 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:37:44 crc kubenswrapper[4619]: E0126 11:37:44.330472 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: 
unable to find data in memory cache]" Jan 26 11:37:50 crc kubenswrapper[4619]: I0126 11:37:50.261234 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:37:50 crc kubenswrapper[4619]: I0126 11:37:50.747486 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9"} Jan 26 11:37:54 crc kubenswrapper[4619]: E0126 11:37:54.606815 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa2c414_823b_48a9_a59d_1f02d1708f9f.slice/crio-68435a930ff71bdf32e9ffdfe277af0bf13f3ff3d08cdef2b78a564eb81780f5\": RecentStats: unable to find data in memory cache]" Jan 26 11:39:18 crc kubenswrapper[4619]: I0126 11:39:18.321945 4619 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="a99ba972-f513-421c-b25d-c8ecbc095c0f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": read tcp 10.217.0.2:50982->10.217.0.207:8774: read: connection reset by peer" Jan 26 11:39:35 crc kubenswrapper[4619]: I0126 11:39:35.864067 4619 generic.go:334] "Generic (PLEG): container finished" podID="b641ed88-2b99-4794-a48d-906d2355417d" containerID="4cef260c79f8175e3cff84eb1a40e54dd1364b2977139a28de2bbae2e25add3f" exitCode=0 Jan 26 11:39:35 crc kubenswrapper[4619]: I0126 11:39:35.864705 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" event={"ID":"b641ed88-2b99-4794-a48d-906d2355417d","Type":"ContainerDied","Data":"4cef260c79f8175e3cff84eb1a40e54dd1364b2977139a28de2bbae2e25add3f"} Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.322256 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.456784 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.456835 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.456889 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9kxr\" (UniqueName: \"kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.456929 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.456993 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.457023 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.457053 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.457723 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.457755 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory\") pod \"b641ed88-2b99-4794-a48d-906d2355417d\" (UID: \"b641ed88-2b99-4794-a48d-906d2355417d\") " Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.462469 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.466676 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr" (OuterVolumeSpecName: "kube-api-access-d9kxr") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "kube-api-access-d9kxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.486579 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory" (OuterVolumeSpecName: "inventory") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.492588 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.492763 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.497450 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.506827 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.508860 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.516488 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b641ed88-2b99-4794-a48d-906d2355417d" (UID: "b641ed88-2b99-4794-a48d-906d2355417d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.559997 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560026 4619 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560035 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560045 4619 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560054 4619 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560063 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9kxr\" (UniqueName: \"kubernetes.io/projected/b641ed88-2b99-4794-a48d-906d2355417d-kube-api-access-d9kxr\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560072 4619 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560080 4619 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b641ed88-2b99-4794-a48d-906d2355417d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.560089 4619 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b641ed88-2b99-4794-a48d-906d2355417d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.882899 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" event={"ID":"b641ed88-2b99-4794-a48d-906d2355417d","Type":"ContainerDied","Data":"1feec97b55643378084f67444204b6e045069385bdf72d5bbc2346b4ac32a5f9"} Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.883879 4619 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1feec97b55643378084f67444204b6e045069385bdf72d5bbc2346b4ac32a5f9" Jan 26 11:39:37 crc kubenswrapper[4619]: I0126 11:39:37.883972 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xjjng" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.015030 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq"] Jan 26 11:39:38 crc kubenswrapper[4619]: E0126 11:39:38.015502 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641ed88-2b99-4794-a48d-906d2355417d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.015522 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641ed88-2b99-4794-a48d-906d2355417d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.016024 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="b641ed88-2b99-4794-a48d-906d2355417d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.016830 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.025190 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq"] Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.029602 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.029946 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.030101 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.030236 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.030268 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fn84q" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170179 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170220 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170347 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170576 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170723 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.170772 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.171182 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjcht\" (UniqueName: \"kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273312 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273359 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273399 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273466 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273487 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273507 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.273584 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjcht\" (UniqueName: \"kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.277768 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.278123 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.278789 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.278964 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: 
\"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.280157 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.280315 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.292863 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjcht\" (UniqueName: \"kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.333320 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:39:38 crc kubenswrapper[4619]: I0126 11:39:38.899505 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq"] Jan 26 11:39:39 crc kubenswrapper[4619]: I0126 11:39:39.910829 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" event={"ID":"c6a5b4c8-fd30-49e5-853a-6512124a63ca","Type":"ContainerStarted","Data":"8e8db7589febb96bf3c58f757874e002bfd3dcd716a534fe6bd1b9d3db1942b8"} Jan 26 11:39:39 crc kubenswrapper[4619]: I0126 11:39:39.913389 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" event={"ID":"c6a5b4c8-fd30-49e5-853a-6512124a63ca","Type":"ContainerStarted","Data":"1966ce2d129e46b43c0cfdc98d4c0993745522ff7bc43ccc8a8e10f45f9980f9"} Jan 26 11:39:39 crc kubenswrapper[4619]: I0126 11:39:39.946151 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" podStartSLOduration=2.4668821149999998 podStartE2EDuration="2.946128139s" podCreationTimestamp="2026-01-26 11:39:37 +0000 UTC" firstStartedPulling="2026-01-26 11:39:38.924652266 +0000 UTC m=+2677.958692982" lastFinishedPulling="2026-01-26 11:39:39.40389828 +0000 UTC m=+2678.437939006" observedRunningTime="2026-01-26 11:39:39.9389426 +0000 UTC m=+2678.972983316" watchObservedRunningTime="2026-01-26 11:39:39.946128139 +0000 UTC m=+2678.980168855" Jan 26 11:40:14 crc kubenswrapper[4619]: I0126 11:40:14.234508 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:40:14 crc 
kubenswrapper[4619]: I0126 11:40:14.235010 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:40:44 crc kubenswrapper[4619]: I0126 11:40:44.233803 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:40:44 crc kubenswrapper[4619]: I0126 11:40:44.234367 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.076880 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.079738 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.118010 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.249891 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.250030 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd5vs\" (UniqueName: \"kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.250143 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.352261 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.352792 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities\") pod 
\"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.352985 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd5vs\" (UniqueName: \"kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.352990 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.353228 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.380515 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd5vs\" (UniqueName: \"kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs\") pod \"redhat-marketplace-2mdgt\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.396590 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:01 crc kubenswrapper[4619]: I0126 11:41:01.890150 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:02 crc kubenswrapper[4619]: I0126 11:41:02.880779 4619 generic.go:334] "Generic (PLEG): container finished" podID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerID="53ea0c352366bfe4f34e725c2f9d80db299fff7c4d13e28f6b39d55164bdbb94" exitCode=0 Jan 26 11:41:02 crc kubenswrapper[4619]: I0126 11:41:02.881140 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerDied","Data":"53ea0c352366bfe4f34e725c2f9d80db299fff7c4d13e28f6b39d55164bdbb94"} Jan 26 11:41:02 crc kubenswrapper[4619]: I0126 11:41:02.881167 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerStarted","Data":"1800641e7bf0d95219819b6fdf2f8763de22751a7079f25dadf438db4931c94f"} Jan 26 11:41:03 crc kubenswrapper[4619]: I0126 11:41:03.890798 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerStarted","Data":"fd18d7ea2a08507faa3e955a43169bf9d2c0cde94f9b9b6988788a3d167311cf"} Jan 26 11:41:04 crc kubenswrapper[4619]: I0126 11:41:04.901591 4619 generic.go:334] "Generic (PLEG): container finished" podID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerID="fd18d7ea2a08507faa3e955a43169bf9d2c0cde94f9b9b6988788a3d167311cf" exitCode=0 Jan 26 11:41:04 crc kubenswrapper[4619]: I0126 11:41:04.901688 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerDied","Data":"fd18d7ea2a08507faa3e955a43169bf9d2c0cde94f9b9b6988788a3d167311cf"} Jan 26 11:41:06 crc kubenswrapper[4619]: I0126 11:41:06.920361 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerStarted","Data":"f3ac265eea696f606675fcdd94d8b66319a31f03da76e785f622a3e2fccf32fc"} Jan 26 11:41:06 crc kubenswrapper[4619]: I0126 11:41:06.944546 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2mdgt" podStartSLOduration=3.181039654 podStartE2EDuration="5.944525945s" podCreationTimestamp="2026-01-26 11:41:01 +0000 UTC" firstStartedPulling="2026-01-26 11:41:02.883814464 +0000 UTC m=+2761.917855180" lastFinishedPulling="2026-01-26 11:41:05.647300755 +0000 UTC m=+2764.681341471" observedRunningTime="2026-01-26 11:41:06.943006125 +0000 UTC m=+2765.977046841" watchObservedRunningTime="2026-01-26 11:41:06.944525945 +0000 UTC m=+2765.978566661" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.176541 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7z4tj"] Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.179898 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.205776 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7z4tj"] Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.319318 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.319374 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.319405 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv5g\" (UniqueName: \"kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.421795 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.422135 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.422267 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv5g\" (UniqueName: \"kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.422307 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.422525 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.444752 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9mv5g\" (UniqueName: \"kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g\") pod \"community-operators-7z4tj\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") " pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:10 crc kubenswrapper[4619]: I0126 11:41:10.495811 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.065393 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7z4tj"] Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.397732 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.398110 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.453928 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.970712 4619 generic.go:334] "Generic (PLEG): container finished" podID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerID="378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624" exitCode=0 Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.971293 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerDied","Data":"378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624"} Jan 26 11:41:11 crc kubenswrapper[4619]: I0126 11:41:11.971370 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerStarted","Data":"5e9c76b14705871488388f34e1bd963e99f5df244231a3b5d5546ff094582ed0"} Jan 26 11:41:12 crc kubenswrapper[4619]: I0126 11:41:12.021392 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:12 crc kubenswrapper[4619]: I0126 11:41:12.979362 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerStarted","Data":"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40"} Jan 26 11:41:13 crc kubenswrapper[4619]: I0126 11:41:13.739601 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:13 crc kubenswrapper[4619]: I0126 11:41:13.987086 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2mdgt" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="registry-server" containerID="cri-o://f3ac265eea696f606675fcdd94d8b66319a31f03da76e785f622a3e2fccf32fc" gracePeriod=2 Jan 26 11:41:14 crc kubenswrapper[4619]: I0126 11:41:14.234906 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
26 11:41:14 crc kubenswrapper[4619]: I0126 11:41:14.234977 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:41:14 crc kubenswrapper[4619]: I0126 11:41:14.235038 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:41:14 crc kubenswrapper[4619]: I0126 11:41:14.235704 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:41:14 crc kubenswrapper[4619]: I0126 11:41:14.235773 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9" gracePeriod=600 Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.000196 4619 generic.go:334] "Generic (PLEG): container finished" podID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerID="f3ac265eea696f606675fcdd94d8b66319a31f03da76e785f622a3e2fccf32fc" exitCode=0 Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.000273 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerDied","Data":"f3ac265eea696f606675fcdd94d8b66319a31f03da76e785f622a3e2fccf32fc"} Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.000597 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mdgt" event={"ID":"5a099e8b-0bf4-4042-aec8-da3f754f523e","Type":"ContainerDied","Data":"1800641e7bf0d95219819b6fdf2f8763de22751a7079f25dadf438db4931c94f"} Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.000622 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1800641e7bf0d95219819b6fdf2f8763de22751a7079f25dadf438db4931c94f" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.003310 4619 generic.go:334] "Generic (PLEG): container finished" podID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerID="7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40" exitCode=0 Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.003375 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerDied","Data":"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40"} Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.011335 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9" exitCode=0 Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.011380 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9"} Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.011433 4619 scope.go:117] "RemoveContainer" containerID="47a8165b1f28a2eae2faa552439f23be0c102480ff1ffbb9d2b68f383bce0e1b" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.046171 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.216092 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd5vs\" (UniqueName: \"kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs\") pod \"5a099e8b-0bf4-4042-aec8-da3f754f523e\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.216225 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content\") pod \"5a099e8b-0bf4-4042-aec8-da3f754f523e\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.216344 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities\") pod \"5a099e8b-0bf4-4042-aec8-da3f754f523e\" (UID: \"5a099e8b-0bf4-4042-aec8-da3f754f523e\") " Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.217292 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities" (OuterVolumeSpecName: "utilities") pod "5a099e8b-0bf4-4042-aec8-da3f754f523e" (UID: "5a099e8b-0bf4-4042-aec8-da3f754f523e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.227469 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs" (OuterVolumeSpecName: "kube-api-access-pd5vs") pod "5a099e8b-0bf4-4042-aec8-da3f754f523e" (UID: "5a099e8b-0bf4-4042-aec8-da3f754f523e"). InnerVolumeSpecName "kube-api-access-pd5vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.240192 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a099e8b-0bf4-4042-aec8-da3f754f523e" (UID: "5a099e8b-0bf4-4042-aec8-da3f754f523e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.318398 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd5vs\" (UniqueName: \"kubernetes.io/projected/5a099e8b-0bf4-4042-aec8-da3f754f523e-kube-api-access-pd5vs\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.318428 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:15 crc kubenswrapper[4619]: I0126 11:41:15.318437 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a099e8b-0bf4-4042-aec8-da3f754f523e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:16 crc kubenswrapper[4619]: I0126 11:41:16.034961 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mdgt" Jan 26 11:41:16 crc kubenswrapper[4619]: I0126 11:41:16.037486 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d"} Jan 26 11:41:16 crc kubenswrapper[4619]: I0126 11:41:16.087072 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:16 crc kubenswrapper[4619]: I0126 11:41:16.108807 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mdgt"] Jan 26 11:41:17 crc kubenswrapper[4619]: I0126 11:41:17.046428 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerStarted","Data":"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc"} Jan 26 11:41:17 crc kubenswrapper[4619]: I0126 11:41:17.069825 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7z4tj" podStartSLOduration=3.236378728 podStartE2EDuration="7.069809168s" podCreationTimestamp="2026-01-26 11:41:10 +0000 UTC" firstStartedPulling="2026-01-26 11:41:11.97307364 +0000 UTC m=+2771.007114356" lastFinishedPulling="2026-01-26 11:41:15.80650408 +0000 UTC m=+2774.840544796" observedRunningTime="2026-01-26 11:41:17.063236316 +0000 UTC m=+2776.097277032" watchObservedRunningTime="2026-01-26 11:41:17.069809168 +0000 UTC m=+2776.103849884" Jan 26 11:41:17 crc kubenswrapper[4619]: I0126 11:41:17.283731 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" path="/var/lib/kubelet/pods/5a099e8b-0bf4-4042-aec8-da3f754f523e/volumes" Jan 26 11:41:20 crc kubenswrapper[4619]: I0126 11:41:20.496306 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:20 crc kubenswrapper[4619]: I0126 11:41:20.497992 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:20 crc kubenswrapper[4619]: I0126 11:41:20.553254 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:21 crc 
Jan 26 11:41:21 crc kubenswrapper[4619]: I0126 11:41:21.148030 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7z4tj"
Jan 26 11:41:21 crc kubenswrapper[4619]: I0126 11:41:21.742286 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7z4tj"]
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.106670 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7z4tj" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="registry-server" containerID="cri-o://1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc" gracePeriod=2
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.639423 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7z4tj"
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.793873 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities\") pod \"fcb2daaa-8826-4542-9c25-e97ed146abe8\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") "
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.793949 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content\") pod \"fcb2daaa-8826-4542-9c25-e97ed146abe8\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") "
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.793999 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mv5g\" (UniqueName: \"kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g\") pod \"fcb2daaa-8826-4542-9c25-e97ed146abe8\" (UID: \"fcb2daaa-8826-4542-9c25-e97ed146abe8\") "
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.794564 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities" (OuterVolumeSpecName: "utilities") pod "fcb2daaa-8826-4542-9c25-e97ed146abe8" (UID: "fcb2daaa-8826-4542-9c25-e97ed146abe8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.799782 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g" (OuterVolumeSpecName: "kube-api-access-9mv5g") pod "fcb2daaa-8826-4542-9c25-e97ed146abe8" (UID: "fcb2daaa-8826-4542-9c25-e97ed146abe8"). InnerVolumeSpecName "kube-api-access-9mv5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.845238 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcb2daaa-8826-4542-9c25-e97ed146abe8" (UID: "fcb2daaa-8826-4542-9c25-e97ed146abe8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.896409 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.896442 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mv5g\" (UniqueName: \"kubernetes.io/projected/fcb2daaa-8826-4542-9c25-e97ed146abe8-kube-api-access-9mv5g\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:23 crc kubenswrapper[4619]: I0126 11:41:23.896454 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb2daaa-8826-4542-9c25-e97ed146abe8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.124036 4619 generic.go:334] "Generic (PLEG): container finished" podID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerID="1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc" exitCode=0 Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.124084 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerDied","Data":"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc"} Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.124146 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7z4tj" event={"ID":"fcb2daaa-8826-4542-9c25-e97ed146abe8","Type":"ContainerDied","Data":"5e9c76b14705871488388f34e1bd963e99f5df244231a3b5d5546ff094582ed0"} Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.124178 4619 scope.go:117] "RemoveContainer" containerID="1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.124414 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7z4tj" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.154145 4619 scope.go:117] "RemoveContainer" containerID="7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.215265 4619 scope.go:117] "RemoveContainer" containerID="378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.224966 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7z4tj"] Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.233919 4619 scope.go:117] "RemoveContainer" containerID="1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.233986 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7z4tj"] Jan 26 11:41:24 crc kubenswrapper[4619]: E0126 11:41:24.234366 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc\": container with ID starting with 1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc not found: ID does not exist" containerID="1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.234397 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc"} err="failed to get container status \"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc\": rpc error: code = NotFound desc = could not find container \"1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc\": container with ID starting with 1db43b8477e35596f118b63c71e0d29d12679d7db6a7fcbf253ca810d83c64dc not found: ID does not exist" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.234417 4619 scope.go:117] "RemoveContainer" containerID="7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40" Jan 26 11:41:24 crc kubenswrapper[4619]: E0126 11:41:24.234827 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40\": container with ID starting with 7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40 not found: ID does not exist" containerID="7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.234849 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40"} err="failed to get container status \"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40\": rpc error: code = NotFound desc = could not find container \"7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40\": container with ID starting with 7867f77fafd927630dabdd827e2d86c5656d933f7a46eb2f6d8a16bda81c4c40 not found: ID does not exist" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.234864 4619 scope.go:117] "RemoveContainer" containerID="378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624" Jan 26 11:41:24 crc kubenswrapper[4619]: E0126 11:41:24.235129 4619 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624\": container with ID starting with 378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624 not found: ID does not exist" containerID="378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624" Jan 26 11:41:24 crc kubenswrapper[4619]: I0126 11:41:24.235150 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624"} err="failed to get container status \"378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624\": rpc error: code = NotFound desc = could not find container \"378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624\": container with ID starting with 378f50cdd838e536c9133d093e787d09aa72cf44e04e675b72828dc3a4b94624 not found: ID does not exist" Jan 26 11:41:25 crc kubenswrapper[4619]: I0126 11:41:25.271721 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" path="/var/lib/kubelet/pods/fcb2daaa-8826-4542-9c25-e97ed146abe8/volumes" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.057593 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058670 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="extract-content" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058698 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="extract-content" Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058713 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058722 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058742 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="extract-content" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058751 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="extract-content" Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058768 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="extract-utilities" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058776 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="extract-utilities" Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058791 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="extract-utilities" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058798 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="extract-utilities" Jan 26 11:42:21 crc kubenswrapper[4619]: E0126 11:42:21.058815 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" 
containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.058823 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.059081 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb2daaa-8826-4542-9c25-e97ed146abe8" containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.059095 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a099e8b-0bf4-4042-aec8-da3f754f523e" containerName="registry-server" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.061437 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.078869 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.226239 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.226657 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.226693 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw5r\" (UniqueName: \"kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.328897 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.329020 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.329056 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw5r\" (UniqueName: \"kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.329780 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.329868 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.351686 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw5r\" (UniqueName: \"kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r\") pod \"redhat-operators-flhr4\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.382249 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:21 crc kubenswrapper[4619]: I0126 11:42:21.927433 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:22 crc kubenswrapper[4619]: I0126 11:42:22.612234 4619 generic.go:334] "Generic (PLEG): container finished" podID="384e8794-82a4-4388-b885-de4b3078658f" containerID="6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2" exitCode=0 Jan 26 11:42:22 crc kubenswrapper[4619]: I0126 11:42:22.612551 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerDied","Data":"6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2"} Jan 26 11:42:22 crc kubenswrapper[4619]: I0126 11:42:22.612591 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerStarted","Data":"ec9214d5c096046f494ebc0f98ae08e5602377e4ffab9059f3b49db288c1ed70"} Jan 26 11:42:22 crc kubenswrapper[4619]: I0126 11:42:22.614953 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:42:23 crc kubenswrapper[4619]: I0126 11:42:23.625606 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerStarted","Data":"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975"} Jan 26 11:42:27 crc kubenswrapper[4619]: I0126 11:42:27.659213 4619 generic.go:334] "Generic (PLEG): container finished" podID="384e8794-82a4-4388-b885-de4b3078658f" containerID="b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975" exitCode=0 Jan 26 11:42:27 crc kubenswrapper[4619]: I0126 11:42:27.659752 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerDied","Data":"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975"} Jan 26 11:42:29 crc kubenswrapper[4619]: I0126 11:42:29.681528 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" 
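
The "Generic (PLEG): container finished" entries above come from the pod lifecycle event generator comparing the runtime's current container states against its previous snapshot and turning the differences into ContainerStarted/ContainerDied events. A toy relist in Go; names and types are illustrative, not kubelet's:

    package main

    import "fmt"

    // podRecord maps containerID -> running.
    type podRecord map[string]bool

    // relist diffs the previous and current view of one pod's containers
    // and emits PLEG-style events like those in the log above.
    func relist(prev, curr podRecord) {
    	for id, running := range curr {
    		switch {
    		case running && !prev[id]:
    			fmt.Println("ContainerStarted", id)
    		case !running && prev[id]:
    			fmt.Println("ContainerDied", id)
    		}
    	}
    }

    func main() {
    	// extract-content finished; the registry-server container came up.
    	relist(
    		podRecord{"b89a41dcb998": true},
    		podRecord{"b89a41dcb998": false, "37b723425499": true},
    	)
    }
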
event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerStarted","Data":"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01"} Jan 26 11:42:29 crc kubenswrapper[4619]: I0126 11:42:29.707455 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flhr4" podStartSLOduration=2.776282329 podStartE2EDuration="8.707431465s" podCreationTimestamp="2026-01-26 11:42:21 +0000 UTC" firstStartedPulling="2026-01-26 11:42:22.614725325 +0000 UTC m=+2841.648766041" lastFinishedPulling="2026-01-26 11:42:28.545874461 +0000 UTC m=+2847.579915177" observedRunningTime="2026-01-26 11:42:29.697444842 +0000 UTC m=+2848.731485568" watchObservedRunningTime="2026-01-26 11:42:29.707431465 +0000 UTC m=+2848.741472191" Jan 26 11:42:31 crc kubenswrapper[4619]: I0126 11:42:31.383862 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:31 crc kubenswrapper[4619]: I0126 11:42:31.384122 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:32 crc kubenswrapper[4619]: I0126 11:42:32.431413 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flhr4" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="registry-server" probeResult="failure" output=< Jan 26 11:42:32 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:42:32 crc kubenswrapper[4619]: > Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.377404 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.380070 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.392332 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.516678 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcbt\" (UniqueName: \"kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.516977 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.517218 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.618754 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.618869 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcbt\" (UniqueName: \"kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.618925 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.619384 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.619445 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.646554 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgcbt\" (UniqueName: \"kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt\") pod \"certified-operators-fgxfv\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:34 crc kubenswrapper[4619]: I0126 11:42:34.702957 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:35 crc kubenswrapper[4619]: I0126 11:42:35.061554 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:35 crc kubenswrapper[4619]: I0126 11:42:35.728992 4619 generic.go:334] "Generic (PLEG): container finished" podID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerID="d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4" exitCode=0 Jan 26 11:42:35 crc kubenswrapper[4619]: I0126 11:42:35.729226 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerDied","Data":"d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4"} Jan 26 11:42:35 crc kubenswrapper[4619]: I0126 11:42:35.729527 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerStarted","Data":"4621e765d60fa3b983602230f4712495bf4ac21a51303e116f4c75b8454ee0b7"} Jan 26 11:42:36 crc kubenswrapper[4619]: I0126 11:42:36.748090 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerStarted","Data":"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a"} Jan 26 11:42:38 crc kubenswrapper[4619]: I0126 11:42:38.764910 4619 generic.go:334] "Generic (PLEG): container finished" podID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerID="c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a" exitCode=0 Jan 26 11:42:38 crc kubenswrapper[4619]: I0126 11:42:38.764977 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerDied","Data":"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a"} Jan 26 11:42:40 crc kubenswrapper[4619]: I0126 11:42:40.784058 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerStarted","Data":"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096"} Jan 26 11:42:40 crc kubenswrapper[4619]: I0126 11:42:40.809109 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fgxfv" podStartSLOduration=2.064320808 podStartE2EDuration="6.809089226s" podCreationTimestamp="2026-01-26 11:42:34 +0000 UTC" firstStartedPulling="2026-01-26 11:42:35.731375498 +0000 UTC m=+2854.765416214" lastFinishedPulling="2026-01-26 11:42:40.476143916 +0000 UTC m=+2859.510184632" observedRunningTime="2026-01-26 11:42:40.804211098 +0000 UTC m=+2859.838251814" watchObservedRunningTime="2026-01-26 11:42:40.809089226 +0000 UTC m=+2859.843129942" Jan 26 11:42:41 crc kubenswrapper[4619]: I0126 11:42:41.463915 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:41 crc kubenswrapper[4619]: I0126 11:42:41.520492 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:41 crc kubenswrapper[4619]: I0126 11:42:41.754199 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:42 crc kubenswrapper[4619]: I0126 11:42:42.805006 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flhr4" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="registry-server" containerID="cri-o://37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01" gracePeriod=2 Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.262349 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.411343 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content\") pod \"384e8794-82a4-4388-b885-de4b3078658f\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.411437 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities\") pod \"384e8794-82a4-4388-b885-de4b3078658f\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.411466 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lw5r\" (UniqueName: \"kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r\") pod \"384e8794-82a4-4388-b885-de4b3078658f\" (UID: \"384e8794-82a4-4388-b885-de4b3078658f\") " Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.412824 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities" (OuterVolumeSpecName: "utilities") pod "384e8794-82a4-4388-b885-de4b3078658f" (UID: "384e8794-82a4-4388-b885-de4b3078658f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.419897 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r" (OuterVolumeSpecName: "kube-api-access-6lw5r") pod "384e8794-82a4-4388-b885-de4b3078658f" (UID: "384e8794-82a4-4388-b885-de4b3078658f"). InnerVolumeSpecName "kube-api-access-6lw5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.514503 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.514552 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lw5r\" (UniqueName: \"kubernetes.io/projected/384e8794-82a4-4388-b885-de4b3078658f-kube-api-access-6lw5r\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.523238 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "384e8794-82a4-4388-b885-de4b3078658f" (UID: "384e8794-82a4-4388-b885-de4b3078658f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.616061 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e8794-82a4-4388-b885-de4b3078658f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.816657 4619 generic.go:334] "Generic (PLEG): container finished" podID="384e8794-82a4-4388-b885-de4b3078658f" containerID="37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01" exitCode=0 Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.816699 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerDied","Data":"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01"} Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.816731 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flhr4" event={"ID":"384e8794-82a4-4388-b885-de4b3078658f","Type":"ContainerDied","Data":"ec9214d5c096046f494ebc0f98ae08e5602377e4ffab9059f3b49db288c1ed70"} Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.816731 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-flhr4" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.816748 4619 scope.go:117] "RemoveContainer" containerID="37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.854070 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.857765 4619 scope.go:117] "RemoveContainer" containerID="b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.863313 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flhr4"] Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.884932 4619 scope.go:117] "RemoveContainer" containerID="6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.942389 4619 scope.go:117] "RemoveContainer" containerID="37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01" Jan 26 11:42:43 crc kubenswrapper[4619]: E0126 11:42:43.943458 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01\": container with ID starting with 37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01 not found: ID does not exist" containerID="37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.943521 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01"} err="failed to get container status \"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01\": rpc error: code = NotFound desc = could not find container \"37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01\": container with ID starting with 37b723425499da1897ef5a6e40f653a21eb7ce0f1bd551ed9bb3cbb4fb392c01 not found: ID does not exist" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.943558 4619 scope.go:117] "RemoveContainer" containerID="b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975" Jan 26 11:42:43 crc kubenswrapper[4619]: E0126 11:42:43.944245 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975\": container with ID starting with b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975 not found: ID does not exist" containerID="b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.944279 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975"} err="failed to get container status \"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975\": rpc error: code = NotFound desc = could not find container \"b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975\": container with ID starting with b89a41dcb998bc1063abcaba4a674aded104cd2a3272746455f4087e2c098975 not found: ID does not exist" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.944302 4619 scope.go:117] "RemoveContainer" 
containerID="6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2" Jan 26 11:42:43 crc kubenswrapper[4619]: E0126 11:42:43.944590 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2\": container with ID starting with 6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2 not found: ID does not exist" containerID="6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2" Jan 26 11:42:43 crc kubenswrapper[4619]: I0126 11:42:43.944642 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2"} err="failed to get container status \"6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2\": rpc error: code = NotFound desc = could not find container \"6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2\": container with ID starting with 6fc077aecd356545882ae0c1f6dd37a113bd98ef479dc5573f3839c406dd63d2 not found: ID does not exist" Jan 26 11:42:44 crc kubenswrapper[4619]: I0126 11:42:44.703717 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:44 crc kubenswrapper[4619]: I0126 11:42:44.703854 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:44 crc kubenswrapper[4619]: I0126 11:42:44.754860 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:45 crc kubenswrapper[4619]: I0126 11:42:45.275389 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384e8794-82a4-4388-b885-de4b3078658f" path="/var/lib/kubelet/pods/384e8794-82a4-4388-b885-de4b3078658f/volumes" Jan 26 11:42:54 crc kubenswrapper[4619]: I0126 11:42:54.749892 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:54 crc kubenswrapper[4619]: I0126 11:42:54.801152 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:54 crc kubenswrapper[4619]: I0126 11:42:54.927772 4619 generic.go:334] "Generic (PLEG): container finished" podID="c6a5b4c8-fd30-49e5-853a-6512124a63ca" containerID="8e8db7589febb96bf3c58f757874e002bfd3dcd716a534fe6bd1b9d3db1942b8" exitCode=0 Jan 26 11:42:54 crc kubenswrapper[4619]: I0126 11:42:54.927823 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" event={"ID":"c6a5b4c8-fd30-49e5-853a-6512124a63ca","Type":"ContainerDied","Data":"8e8db7589febb96bf3c58f757874e002bfd3dcd716a534fe6bd1b9d3db1942b8"} Jan 26 11:42:54 crc kubenswrapper[4619]: I0126 11:42:54.928009 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fgxfv" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="registry-server" containerID="cri-o://19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096" gracePeriod=2 Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.385657 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.538167 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgcbt\" (UniqueName: \"kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt\") pod \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.538640 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities\") pod \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.538833 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content\") pod \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\" (UID: \"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f\") " Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.539604 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities" (OuterVolumeSpecName: "utilities") pod "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" (UID: "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.553793 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt" (OuterVolumeSpecName: "kube-api-access-xgcbt") pod "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" (UID: "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f"). InnerVolumeSpecName "kube-api-access-xgcbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.590838 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" (UID: "211ba2ea-2a2c-4f2a-8c5b-100279b73d2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.641747 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.641786 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.641800 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgcbt\" (UniqueName: \"kubernetes.io/projected/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f-kube-api-access-xgcbt\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.958264 4619 generic.go:334] "Generic (PLEG): container finished" podID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerID="19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096" exitCode=0 Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.958646 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgxfv" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.958808 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerDied","Data":"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096"} Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.958873 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgxfv" event={"ID":"211ba2ea-2a2c-4f2a-8c5b-100279b73d2f","Type":"ContainerDied","Data":"4621e765d60fa3b983602230f4712495bf4ac21a51303e116f4c75b8454ee0b7"} Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.958911 4619 scope.go:117] "RemoveContainer" containerID="19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096" Jan 26 11:42:55 crc kubenswrapper[4619]: I0126 11:42:55.998533 4619 scope.go:117] "RemoveContainer" containerID="c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.004733 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.010900 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgxfv"] Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.037196 4619 scope.go:117] "RemoveContainer" containerID="d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.064876 4619 scope.go:117] "RemoveContainer" containerID="19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096" Jan 26 11:42:56 crc kubenswrapper[4619]: E0126 11:42:56.065222 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096\": container with ID starting with 19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096 not found: ID does not exist" containerID="19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.065256 
Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.065256 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096"} err="failed to get container status \"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096\": rpc error: code = NotFound desc = could not find container \"19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096\": container with ID starting with 19a6d83a4733bb378fdbdeef7e936241a8050ea639afd8b7e2795efec333d096 not found: ID does not exist"
Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.065277 4619 scope.go:117] "RemoveContainer" containerID="c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a"
Jan 26 11:42:56 crc kubenswrapper[4619]: E0126 11:42:56.065495 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a\": container with ID starting with c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a not found: ID does not exist" containerID="c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a"
Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.065529 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a"} err="failed to get container status \"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a\": rpc error: code = NotFound desc = could not find container \"c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a\": container with ID starting with c7df8ff2a2ac366cab449a24edde0a0d347dedca0a128f8272dc60b5d271416a not found: ID does not exist"
Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.065569 4619 scope.go:117] "RemoveContainer" containerID="d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4"
Jan 26 11:42:56 crc kubenswrapper[4619]: E0126 11:42:56.065926 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4\": container with ID starting with d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4 not found: ID does not exist" containerID="d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4"
Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.066117 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4"} err="failed to get container status \"d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4\": rpc error: code = NotFound desc = could not find container \"d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4\": container with ID starting with d822fc17cab811f76ea8972dbdd7e44197d586f7fd3c4f14b22a883042e1fca4 not found: ID does not exist"
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569609 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569733 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569756 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569883 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569912 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.569949 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjcht\" (UniqueName: \"kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.570097 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1\") pod \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\" (UID: \"c6a5b4c8-fd30-49e5-853a-6512124a63ca\") " Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.584936 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht" (OuterVolumeSpecName: "kube-api-access-fjcht") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "kube-api-access-fjcht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.585370 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.600970 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.601887 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory" (OuterVolumeSpecName: "inventory") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.602837 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.606076 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.614386 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c6a5b4c8-fd30-49e5-853a-6512124a63ca" (UID: "c6a5b4c8-fd30-49e5-853a-6512124a63ca"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672311 4619 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672362 4619 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672381 4619 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672399 4619 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672417 4619 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672432 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6a5b4c8-fd30-49e5-853a-6512124a63ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.672447 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjcht\" (UniqueName: \"kubernetes.io/projected/c6a5b4c8-fd30-49e5-853a-6512124a63ca-kube-api-access-fjcht\") on node \"crc\" DevicePath \"\"" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.974050 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" event={"ID":"c6a5b4c8-fd30-49e5-853a-6512124a63ca","Type":"ContainerDied","Data":"1966ce2d129e46b43c0cfdc98d4c0993745522ff7bc43ccc8a8e10f45f9980f9"} Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.974101 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1966ce2d129e46b43c0cfdc98d4c0993745522ff7bc43ccc8a8e10f45f9980f9" Jan 26 11:42:56 crc kubenswrapper[4619]: I0126 11:42:56.974166 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq" Jan 26 11:42:57 crc kubenswrapper[4619]: I0126 11:42:57.271524 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" path="/var/lib/kubelet/pods/211ba2ea-2a2c-4f2a-8c5b-100279b73d2f/volumes" Jan 26 11:43:44 crc kubenswrapper[4619]: I0126 11:43:44.234807 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:43:44 crc kubenswrapper[4619]: I0126 11:43:44.235386 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.077087 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.078482 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a5b4c8-fd30-49e5-853a-6512124a63ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.078504 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a5b4c8-fd30-49e5-853a-6512124a63ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.078528 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.078537 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.078565 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="extract-utilities" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.078574 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="extract-utilities" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.078612 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="extract-utilities" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.081712 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="extract-utilities" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.081778 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="extract-content" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.081790 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="extract-content" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.081827 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 
11:44:01.081863 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: E0126 11:44:01.081905 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="extract-content" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.081920 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="extract-content" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.082578 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="211ba2ea-2a2c-4f2a-8c5b-100279b73d2f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.082638 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a5b4c8-fd30-49e5-853a-6512124a63ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.082662 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="384e8794-82a4-4388-b885-de4b3078658f" containerName="registry-server" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.083700 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.091352 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.092305 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gwxgg" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.092577 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.095839 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.097600 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.097752 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.097825 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.121674 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200004 4619 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200282 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200315 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200351 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200394 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4ch\" (UniqueName: \"kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200422 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200460 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200530 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.200565 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.201978 4619 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.202445 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.213830 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302661 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302745 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302817 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302863 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302915 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4ch\" (UniqueName: \"kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.302933 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.303845 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: 
\"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.303909 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.304775 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.304823 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.308520 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.317569 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.319346 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4ch\" (UniqueName: \"kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.332009 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") " pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.433970 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gwxgg" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.443379 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 11:44:01 crc kubenswrapper[4619]: I0126 11:44:01.875964 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 11:44:02 crc kubenswrapper[4619]: I0126 11:44:02.521106 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"857314c6-b4dd-4f76-8a06-2bf24b654fe3","Type":"ContainerStarted","Data":"c7b1aebdf79597050a54b3fa150a7735a905de00edbf627de91905d2eac64f0d"} Jan 26 11:44:14 crc kubenswrapper[4619]: I0126 11:44:14.238604 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:44:14 crc kubenswrapper[4619]: I0126 11:44:14.239283 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:44:40 crc kubenswrapper[4619]: E0126 11:44:40.792505 4619 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 26 11:44:40 crc kubenswrapper[4619]: E0126 11:44:40.794305 4619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn4ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.
io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(857314c6-b4dd-4f76-8a06-2bf24b654fe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:44:40 crc kubenswrapper[4619]: E0126 11:44:40.795708 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" Jan 26 11:44:40 crc kubenswrapper[4619]: E0126 11:44:40.901066 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.233919 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.234250 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.234290 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.234825 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.234878 4619 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" gracePeriod=600 Jan 26 11:44:44 crc kubenswrapper[4619]: E0126 11:44:44.365959 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:44:44 crc kubenswrapper[4619]: E0126 11:44:44.388159 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf33a41bb_6406_4c73_8024_4acd72817832.slice/crio-conmon-8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf33a41bb_6406_4c73_8024_4acd72817832.slice/crio-8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.935684 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" exitCode=0 Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.935750 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d"} Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.936020 4619 scope.go:117] "RemoveContainer" containerID="d028f92205bfbf0fdeda46223cedf4233a9bef0dd0215cc4b9182d465ea565f9" Jan 26 11:44:44 crc kubenswrapper[4619]: I0126 11:44:44.936596 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:44:44 crc kubenswrapper[4619]: E0126 11:44:44.937108 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:44:54 crc kubenswrapper[4619]: I0126 11:44:54.728363 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 11:44:56 crc kubenswrapper[4619]: I0126 11:44:56.053418 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"857314c6-b4dd-4f76-8a06-2bf24b654fe3","Type":"ContainerStarted","Data":"1164105e871ceffce67902b1056b6f1355743734ed19f08198390474cad42ec1"} Jan 26 11:44:56 crc kubenswrapper[4619]: I0126 11:44:56.081207 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/tempest-tests-tempest" podStartSLOduration=3.247667366 podStartE2EDuration="56.081189555s" podCreationTimestamp="2026-01-26 11:44:00 +0000 UTC" firstStartedPulling="2026-01-26 11:44:01.884409769 +0000 UTC m=+2940.918450485" lastFinishedPulling="2026-01-26 11:44:54.717931958 +0000 UTC m=+2993.751972674" observedRunningTime="2026-01-26 11:44:56.074543001 +0000 UTC m=+2995.108583747" watchObservedRunningTime="2026-01-26 11:44:56.081189555 +0000 UTC m=+2995.115230271" Jan 26 11:44:57 crc kubenswrapper[4619]: I0126 11:44:57.261386 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:44:57 crc kubenswrapper[4619]: E0126 11:44:57.261835 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.154679 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp"] Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.156318 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.158518 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.173793 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.193739 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp"] Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.241537 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.241907 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.241984 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvp44\" (UniqueName: \"kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.343799 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.344095 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.344217 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvp44\" (UniqueName: \"kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.345403 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.351848 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.371013 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvp44\" (UniqueName: \"kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44\") pod \"collect-profiles-29490465-wdvkp\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:00 crc kubenswrapper[4619]: I0126 11:45:00.512815 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:01 crc kubenswrapper[4619]: I0126 11:45:01.118339 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp"] Jan 26 11:45:02 crc kubenswrapper[4619]: I0126 11:45:02.104852 4619 generic.go:334] "Generic (PLEG): container finished" podID="8f23c1ed-eab6-4e9d-8522-483373c72316" containerID="a6f831194ede9b237cd5fe18c69801726185bd8b277f6e93fa0c113072e36ba8" exitCode=0 Jan 26 11:45:02 crc kubenswrapper[4619]: I0126 11:45:02.105145 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" event={"ID":"8f23c1ed-eab6-4e9d-8522-483373c72316","Type":"ContainerDied","Data":"a6f831194ede9b237cd5fe18c69801726185bd8b277f6e93fa0c113072e36ba8"} Jan 26 11:45:02 crc kubenswrapper[4619]: I0126 11:45:02.105171 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" event={"ID":"8f23c1ed-eab6-4e9d-8522-483373c72316","Type":"ContainerStarted","Data":"c404b56f24ae60574adef71f3fb86c7618954a68dd1805632c8b9380e94bd80e"} Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.439861 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.612986 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvp44\" (UniqueName: \"kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44\") pod \"8f23c1ed-eab6-4e9d-8522-483373c72316\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.613203 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume\") pod \"8f23c1ed-eab6-4e9d-8522-483373c72316\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.613251 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume\") pod \"8f23c1ed-eab6-4e9d-8522-483373c72316\" (UID: \"8f23c1ed-eab6-4e9d-8522-483373c72316\") " Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.613797 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f23c1ed-eab6-4e9d-8522-483373c72316" (UID: "8f23c1ed-eab6-4e9d-8522-483373c72316"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.613992 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f23c1ed-eab6-4e9d-8522-483373c72316-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.619340 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44" (OuterVolumeSpecName: "kube-api-access-zvp44") pod "8f23c1ed-eab6-4e9d-8522-483373c72316" (UID: "8f23c1ed-eab6-4e9d-8522-483373c72316"). InnerVolumeSpecName "kube-api-access-zvp44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.625818 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f23c1ed-eab6-4e9d-8522-483373c72316" (UID: "8f23c1ed-eab6-4e9d-8522-483373c72316"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.716237 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f23c1ed-eab6-4e9d-8522-483373c72316-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:03 crc kubenswrapper[4619]: I0126 11:45:03.716275 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvp44\" (UniqueName: \"kubernetes.io/projected/8f23c1ed-eab6-4e9d-8522-483373c72316-kube-api-access-zvp44\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:04 crc kubenswrapper[4619]: I0126 11:45:04.126783 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" event={"ID":"8f23c1ed-eab6-4e9d-8522-483373c72316","Type":"ContainerDied","Data":"c404b56f24ae60574adef71f3fb86c7618954a68dd1805632c8b9380e94bd80e"} Jan 26 11:45:04 crc kubenswrapper[4619]: I0126 11:45:04.126830 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c404b56f24ae60574adef71f3fb86c7618954a68dd1805632c8b9380e94bd80e" Jan 26 11:45:04 crc kubenswrapper[4619]: I0126 11:45:04.126895 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-wdvkp" Jan 26 11:45:04 crc kubenswrapper[4619]: I0126 11:45:04.518105 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2"] Jan 26 11:45:04 crc kubenswrapper[4619]: I0126 11:45:04.526165 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490420-7xdt2"] Jan 26 11:45:05 crc kubenswrapper[4619]: I0126 11:45:05.271809 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9985263d-2046-4641-b6cb-235bc8403d32" path="/var/lib/kubelet/pods/9985263d-2046-4641-b6cb-235bc8403d32/volumes" Jan 26 11:45:09 crc kubenswrapper[4619]: I0126 11:45:09.260768 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:45:09 crc kubenswrapper[4619]: E0126 11:45:09.261330 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:45:22 crc kubenswrapper[4619]: I0126 11:45:22.260867 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:45:22 crc kubenswrapper[4619]: E0126 11:45:22.262635 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:45:33 crc kubenswrapper[4619]: I0126 11:45:33.098352 4619 scope.go:117] "RemoveContainer" containerID="b6e66f2075e14cf18463c0dd36480aef651f2ea230914606076060d9762e255d" Jan 26 11:45:37 crc kubenswrapper[4619]: I0126 11:45:37.261217 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:45:37 crc kubenswrapper[4619]: E0126 11:45:37.262302 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:45:48 crc kubenswrapper[4619]: I0126 11:45:48.261294 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:45:48 crc kubenswrapper[4619]: E0126 11:45:48.262018 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:46:00 crc kubenswrapper[4619]: I0126 11:46:00.261236 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:46:00 crc kubenswrapper[4619]: E0126 11:46:00.262204 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:46:14 crc kubenswrapper[4619]: I0126 11:46:14.261229 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:46:14 crc kubenswrapper[4619]: E0126 11:46:14.262026 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:46:25 crc kubenswrapper[4619]: I0126 11:46:25.261830 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:46:25 crc kubenswrapper[4619]: E0126 11:46:25.262644 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:46:39 crc kubenswrapper[4619]: I0126 11:46:39.281519 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:46:39 crc kubenswrapper[4619]: E0126 11:46:39.283227 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:46:51 crc kubenswrapper[4619]: I0126 11:46:51.272120 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:46:51 crc kubenswrapper[4619]: E0126 11:46:51.272921 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:47:02 crc kubenswrapper[4619]: I0126 11:47:02.261526 4619 
scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:47:02 crc kubenswrapper[4619]: E0126 11:47:02.262143 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:47:17 crc kubenswrapper[4619]: I0126 11:47:17.261415 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:47:17 crc kubenswrapper[4619]: E0126 11:47:17.262140 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:47:29 crc kubenswrapper[4619]: I0126 11:47:29.261393 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:47:29 crc kubenswrapper[4619]: E0126 11:47:29.262351 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:47:33 crc kubenswrapper[4619]: I0126 11:47:33.176757 4619 scope.go:117] "RemoveContainer" containerID="53ea0c352366bfe4f34e725c2f9d80db299fff7c4d13e28f6b39d55164bdbb94" Jan 26 11:47:33 crc kubenswrapper[4619]: I0126 11:47:33.212640 4619 scope.go:117] "RemoveContainer" containerID="f3ac265eea696f606675fcdd94d8b66319a31f03da76e785f622a3e2fccf32fc" Jan 26 11:47:33 crc kubenswrapper[4619]: I0126 11:47:33.262602 4619 scope.go:117] "RemoveContainer" containerID="fd18d7ea2a08507faa3e955a43169bf9d2c0cde94f9b9b6988788a3d167311cf" Jan 26 11:47:44 crc kubenswrapper[4619]: I0126 11:47:44.261259 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:47:44 crc kubenswrapper[4619]: E0126 11:47:44.261943 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:47:56 crc kubenswrapper[4619]: I0126 11:47:56.262070 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:47:56 crc kubenswrapper[4619]: E0126 11:47:56.264366 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:48:10 crc kubenswrapper[4619]: I0126 11:48:10.261913 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:48:10 crc kubenswrapper[4619]: E0126 11:48:10.262753 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:48:22 crc kubenswrapper[4619]: I0126 11:48:22.261500 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:48:22 crc kubenswrapper[4619]: E0126 11:48:22.262180 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:48:36 crc kubenswrapper[4619]: I0126 11:48:36.261027 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:48:36 crc kubenswrapper[4619]: E0126 11:48:36.261759 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:48:47 crc kubenswrapper[4619]: I0126 11:48:47.261477 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:48:47 crc kubenswrapper[4619]: E0126 11:48:47.262185 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:49:01 crc kubenswrapper[4619]: I0126 11:49:01.271509 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:49:01 crc kubenswrapper[4619]: E0126 11:49:01.279111 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:49:12 crc kubenswrapper[4619]: I0126 11:49:12.261494 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:49:12 crc kubenswrapper[4619]: E0126 11:49:12.262177 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:49:25 crc kubenswrapper[4619]: I0126 11:49:25.262690 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:49:25 crc kubenswrapper[4619]: E0126 11:49:25.263581 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:49:39 crc kubenswrapper[4619]: I0126 11:49:39.261419 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:49:39 crc kubenswrapper[4619]: E0126 11:49:39.262326 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:49:54 crc kubenswrapper[4619]: I0126 11:49:54.260740 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:49:55 crc kubenswrapper[4619]: I0126 11:49:55.597683 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea"} Jan 26 11:50:40 crc kubenswrapper[4619]: I0126 11:50:40.043221 4619 generic.go:334] "Generic (PLEG): container finished" podID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" containerID="1164105e871ceffce67902b1056b6f1355743734ed19f08198390474cad42ec1" exitCode=0 Jan 26 11:50:40 crc kubenswrapper[4619]: I0126 11:50:40.043290 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"857314c6-b4dd-4f76-8a06-2bf24b654fe3","Type":"ContainerDied","Data":"1164105e871ceffce67902b1056b6f1355743734ed19f08198390474cad42ec1"} Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.404240 4619 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584048 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584189 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584242 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584289 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584380 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.584521 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4ch\" (UniqueName: \"kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.585194 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.585247 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.585368 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data\") pod \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\" (UID: \"857314c6-b4dd-4f76-8a06-2bf24b654fe3\") "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.587201 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data" (OuterVolumeSpecName: "config-data") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.587571 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.588209 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.590185 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch" (OuterVolumeSpecName: "kube-api-access-xn4ch") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "kube-api-access-xn4ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.591165 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.637835 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.638670 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.642500 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687172 4619 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687208 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn4ch\" (UniqueName: \"kubernetes.io/projected/857314c6-b4dd-4f76-8a06-2bf24b654fe3-kube-api-access-xn4ch\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687219 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687228 4619 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687237 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687246 4619 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/857314c6-b4dd-4f76-8a06-2bf24b654fe3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687267 4619 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.687276 4619 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857314c6-b4dd-4f76-8a06-2bf24b654fe3-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.694195 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "857314c6-b4dd-4f76-8a06-2bf24b654fe3" (UID: "857314c6-b4dd-4f76-8a06-2bf24b654fe3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.710000 4619 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.788626 4619 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:41 crc kubenswrapper[4619]: I0126 11:50:41.788838 4619 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/857314c6-b4dd-4f76-8a06-2bf24b654fe3-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:50:42 crc kubenswrapper[4619]: I0126 11:50:42.061895 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"857314c6-b4dd-4f76-8a06-2bf24b654fe3","Type":"ContainerDied","Data":"c7b1aebdf79597050a54b3fa150a7735a905de00edbf627de91905d2eac64f0d"}
Jan 26 11:50:42 crc kubenswrapper[4619]: I0126 11:50:42.062226 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b1aebdf79597050a54b3fa150a7735a905de00edbf627de91905d2eac64f0d"
Jan 26 11:50:42 crc kubenswrapper[4619]: I0126 11:50:42.061946 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.408796 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 26 11:50:50 crc kubenswrapper[4619]: E0126 11:50:50.409741 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" containerName="tempest-tests-tempest-tests-runner"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.409754 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" containerName="tempest-tests-tempest-tests-runner"
Jan 26 11:50:50 crc kubenswrapper[4619]: E0126 11:50:50.409771 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f23c1ed-eab6-4e9d-8522-483373c72316" containerName="collect-profiles"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.409777 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f23c1ed-eab6-4e9d-8522-483373c72316" containerName="collect-profiles"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.409976 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f23c1ed-eab6-4e9d-8522-483373c72316" containerName="collect-profiles"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.409991 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="857314c6-b4dd-4f76-8a06-2bf24b654fe3" containerName="tempest-tests-tempest-tests-runner"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.410594 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.414656 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gwxgg"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.433758 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.597707 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.597898 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzkl\" (UniqueName: \"kubernetes.io/projected/06dbbb97-7e72-4105-bc6c-275ca6b8c3ee-kube-api-access-kkzkl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.700195 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.700339 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzkl\" (UniqueName: \"kubernetes.io/projected/06dbbb97-7e72-4105-bc6c-275ca6b8c3ee-kube-api-access-kkzkl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.700657 4619 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.738700 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzkl\" (UniqueName: \"kubernetes.io/projected/06dbbb97-7e72-4105-bc6c-275ca6b8c3ee-kube-api-access-kkzkl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.740180 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:50 crc kubenswrapper[4619]: I0126 11:50:50.746552 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 26 11:50:51 crc kubenswrapper[4619]: I0126 11:50:51.246662 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 26 11:50:51 crc kubenswrapper[4619]: I0126 11:50:51.259107 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 11:50:52 crc kubenswrapper[4619]: I0126 11:50:52.159826 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee","Type":"ContainerStarted","Data":"501c4eb456de9d803aef88aadfc63dd79b6173f814772bec7688f2c5109fe759"}
Jan 26 11:50:53 crc kubenswrapper[4619]: I0126 11:50:53.174644 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"06dbbb97-7e72-4105-bc6c-275ca6b8c3ee","Type":"ContainerStarted","Data":"0a80442ebfb250c04a3f92028a552ccdcceb00bea1023602ce616bb9b52052d0"}
Jan 26 11:50:53 crc kubenswrapper[4619]: I0126 11:50:53.194990 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.841334898 podStartE2EDuration="3.194969704s" podCreationTimestamp="2026-01-26 11:50:50 +0000 UTC" firstStartedPulling="2026-01-26 11:50:51.25883542 +0000 UTC m=+3350.292876136" lastFinishedPulling="2026-01-26 11:50:52.612470226 +0000 UTC m=+3351.646510942" observedRunningTime="2026-01-26 11:50:53.184082818 +0000 UTC m=+3352.218123534" watchObservedRunningTime="2026-01-26 11:50:53.194969704 +0000 UTC m=+3352.229010420"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.333548 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5whm8"]
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.336761 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5whm8"
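NOTE: in the "Observed pod startup duration" entry above, the two durations differ by exactly the image pull time: lastFinishedPulling (11:50:52.612470226) minus firstStartedPulling (11:50:51.25883542) is 1.353634806s, and podStartE2EDuration 3.194969704s minus that pull time gives podStartSLOduration 1.841334898, the figure logged. A short Go check of that arithmetic, with the timestamps rewritten in RFC 3339 form (an adaptation of the log's "+0000 UTC" layout):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the pod_startup_latency_tracker entry above.
    	first, _ := time.Parse(time.RFC3339Nano, "2026-01-26T11:50:51.25883542Z")
    	last, _ := time.Parse(time.RFC3339Nano, "2026-01-26T11:50:52.612470226Z")
    	e2e := 3194969704 * time.Nanosecond // podStartE2EDuration=3.194969704s

    	pull := last.Sub(first)
    	fmt.Println("image pull:  ", pull)     // 1.353634806s
    	fmt.Println("SLO duration:", e2e-pull) // 1.841334898s, as logged
    }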
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.362472 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5whm8"]
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.442609 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxn9\" (UniqueName: \"kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.442934 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.442997 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.544418 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxn9\" (UniqueName: \"kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.544520 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.544548 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.544965 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.545252 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.567461 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxn9\" (UniqueName: \"kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9\") pod \"community-operators-5whm8\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:12 crc kubenswrapper[4619]: I0126 11:51:12.664891 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:13 crc kubenswrapper[4619]: I0126 11:51:13.228778 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5whm8"]
Jan 26 11:51:13 crc kubenswrapper[4619]: I0126 11:51:13.381291 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerStarted","Data":"08979d5e72853a726e67bd5f824c473b4611ed4c674b4e8a779a550619e9cb51"}
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.393415 4619 generic.go:334] "Generic (PLEG): container finished" podID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerID="c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d" exitCode=0
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.393458 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerDied","Data":"c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d"}
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.920653 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cv5tk/must-gather-s8kkt"]
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.931731 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.933762 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cv5tk"/"default-dockercfg-sfxt5"
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.933943 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cv5tk"/"kube-root-ca.crt"
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.934654 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cv5tk"/"openshift-service-ca.crt"
Jan 26 11:51:14 crc kubenswrapper[4619]: I0126 11:51:14.936287 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cv5tk/must-gather-s8kkt"]
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.091049 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.091204 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92b9j\" (UniqueName: \"kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.193182 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.193242 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92b9j\" (UniqueName: \"kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.193688 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.224494 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92b9j\" (UniqueName: \"kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j\") pod \"must-gather-s8kkt\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") " pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.253216 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.413889 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerStarted","Data":"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd"}
Jan 26 11:51:15 crc kubenswrapper[4619]: I0126 11:51:15.788823 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cv5tk/must-gather-s8kkt"]
Jan 26 11:51:16 crc kubenswrapper[4619]: I0126 11:51:16.442809 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" event={"ID":"56faf884-e166-465d-89d7-f5e3c60acd5e","Type":"ContainerStarted","Data":"837c6b0b385e407ca05cc2b424558f0932a00bce734ab17818e6c8e3d25439ff"}
Jan 26 11:51:16 crc kubenswrapper[4619]: I0126 11:51:16.454009 4619 generic.go:334] "Generic (PLEG): container finished" podID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerID="e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd" exitCode=0
Jan 26 11:51:16 crc kubenswrapper[4619]: I0126 11:51:16.454073 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerDied","Data":"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd"}
Jan 26 11:51:17 crc kubenswrapper[4619]: I0126 11:51:17.469533 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerStarted","Data":"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193"}
Jan 26 11:51:17 crc kubenswrapper[4619]: I0126 11:51:17.497359 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5whm8" podStartSLOduration=2.966707289 podStartE2EDuration="5.497341987s" podCreationTimestamp="2026-01-26 11:51:12 +0000 UTC" firstStartedPulling="2026-01-26 11:51:14.397369298 +0000 UTC m=+3373.431410014" lastFinishedPulling="2026-01-26 11:51:16.928003996 +0000 UTC m=+3375.962044712" observedRunningTime="2026-01-26 11:51:17.495769335 +0000 UTC m=+3376.529810051" watchObservedRunningTime="2026-01-26 11:51:17.497341987 +0000 UTC m=+3376.531382703"
Jan 26 11:51:22 crc kubenswrapper[4619]: I0126 11:51:22.665844 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:22 crc kubenswrapper[4619]: I0126 11:51:22.666218 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:22 crc kubenswrapper[4619]: I0126 11:51:22.720265 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:23 crc kubenswrapper[4619]: I0126 11:51:23.573432 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5whm8"
Jan 26 11:51:23 crc kubenswrapper[4619]: I0126 11:51:23.617902 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5whm8"]
Jan 26 11:51:25 crc kubenswrapper[4619]: I0126 11:51:25.540187 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5whm8" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="registry-server" containerID="cri-o://75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" gracePeriod=2
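NOTE: the probe transitions above show the expected ordering for a pod with a startup probe: readiness is reported as "" (not yet run) while the startup probe is still unhealthy, and only after startup flips to "started" does the readiness probe run and report "ready". A toy Go sketch of that gating; this is a self-contained model for reading the log, not kubelet code:

    package main

    import "fmt"

    // probeState models the gate seen above: readiness stays unreported ("")
    // until the startup probe has succeeded once.
    type probeState struct {
    	startupDone bool
    	ready       bool
    }

    func (p *probeState) readiness() string {
    	if !p.startupDone {
    		return "" // matches: probe="readiness" status=""
    	}
    	if p.ready {
    		return "ready"
    	}
    	return "unready"
    }

    func main() {
    	p := &probeState{}
    	fmt.Printf("startup pending, readiness=%q\n", p.readiness()) // ""
    	p.startupDone = true // probe="startup" status="started"
    	p.ready = true       // registry-server begins answering
    	fmt.Printf("startup done,    readiness=%q\n", p.readiness()) // "ready"
    }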
pod="openshift-marketplace/community-operators-5whm8" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="registry-server" containerID="cri-o://75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" gracePeriod=2 Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.111830 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5whm8" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.248715 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxn9\" (UniqueName: \"kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9\") pod \"ce94576f-5138-43af-8fbb-c5d22287bf35\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.248863 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content\") pod \"ce94576f-5138-43af-8fbb-c5d22287bf35\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.248945 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities\") pod \"ce94576f-5138-43af-8fbb-c5d22287bf35\" (UID: \"ce94576f-5138-43af-8fbb-c5d22287bf35\") " Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.249744 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities" (OuterVolumeSpecName: "utilities") pod "ce94576f-5138-43af-8fbb-c5d22287bf35" (UID: "ce94576f-5138-43af-8fbb-c5d22287bf35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.250137 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.256706 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9" (OuterVolumeSpecName: "kube-api-access-kbxn9") pod "ce94576f-5138-43af-8fbb-c5d22287bf35" (UID: "ce94576f-5138-43af-8fbb-c5d22287bf35"). InnerVolumeSpecName "kube-api-access-kbxn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.306548 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce94576f-5138-43af-8fbb-c5d22287bf35" (UID: "ce94576f-5138-43af-8fbb-c5d22287bf35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.351730 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbxn9\" (UniqueName: \"kubernetes.io/projected/ce94576f-5138-43af-8fbb-c5d22287bf35-kube-api-access-kbxn9\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.351768 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce94576f-5138-43af-8fbb-c5d22287bf35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.553454 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" event={"ID":"56faf884-e166-465d-89d7-f5e3c60acd5e","Type":"ContainerStarted","Data":"3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10"} Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.553510 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" event={"ID":"56faf884-e166-465d-89d7-f5e3c60acd5e","Type":"ContainerStarted","Data":"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"} Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.557902 4619 generic.go:334] "Generic (PLEG): container finished" podID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerID="75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" exitCode=0 Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.557992 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5whm8" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.558015 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerDied","Data":"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193"} Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.558473 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5whm8" event={"ID":"ce94576f-5138-43af-8fbb-c5d22287bf35","Type":"ContainerDied","Data":"08979d5e72853a726e67bd5f824c473b4611ed4c674b4e8a779a550619e9cb51"} Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.558502 4619 scope.go:117] "RemoveContainer" containerID="75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.574198 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" podStartSLOduration=2.648136967 podStartE2EDuration="12.5741692s" podCreationTimestamp="2026-01-26 11:51:14 +0000 UTC" firstStartedPulling="2026-01-26 11:51:15.801437725 +0000 UTC m=+3374.835478441" lastFinishedPulling="2026-01-26 11:51:25.727469968 +0000 UTC m=+3384.761510674" observedRunningTime="2026-01-26 11:51:26.573520492 +0000 UTC m=+3385.607561208" watchObservedRunningTime="2026-01-26 11:51:26.5741692 +0000 UTC m=+3385.608209936" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.610926 4619 scope.go:117] "RemoveContainer" containerID="e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.611343 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5whm8"] Jan 26 11:51:26 crc kubenswrapper[4619]: 
I0126 11:51:26.620209 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5whm8"] Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.633090 4619 scope.go:117] "RemoveContainer" containerID="c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.656327 4619 scope.go:117] "RemoveContainer" containerID="75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" Jan 26 11:51:26 crc kubenswrapper[4619]: E0126 11:51:26.656810 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193\": container with ID starting with 75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193 not found: ID does not exist" containerID="75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.656844 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193"} err="failed to get container status \"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193\": rpc error: code = NotFound desc = could not find container \"75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193\": container with ID starting with 75c99fa63cc1761a3e408c029d6534105191d38a7cfeebe84432a283a07e6193 not found: ID does not exist" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.656865 4619 scope.go:117] "RemoveContainer" containerID="e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd" Jan 26 11:51:26 crc kubenswrapper[4619]: E0126 11:51:26.657251 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd\": container with ID starting with e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd not found: ID does not exist" containerID="e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.657298 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd"} err="failed to get container status \"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd\": rpc error: code = NotFound desc = could not find container \"e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd\": container with ID starting with e4a6f3173edae28b435cd0cd02c305df7b309fc6ef08f88014ad8bcfa380b4dd not found: ID does not exist" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.657331 4619 scope.go:117] "RemoveContainer" containerID="c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d" Jan 26 11:51:26 crc kubenswrapper[4619]: E0126 11:51:26.657721 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d\": container with ID starting with c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d not found: ID does not exist" containerID="c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d" Jan 26 11:51:26 crc kubenswrapper[4619]: I0126 11:51:26.657748 4619 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d"} err="failed to get container status \"c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d\": rpc error: code = NotFound desc = could not find container \"c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d\": container with ID starting with c5c7d257dcc3a530053879f3a810e71aa18f5e3910b44cf698d7d199e6563c4d not found: ID does not exist" Jan 26 11:51:27 crc kubenswrapper[4619]: I0126 11:51:27.271884 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" path="/var/lib/kubelet/pods/ce94576f-5138-43af-8fbb-c5d22287bf35/volumes" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.747821 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-hx25k"] Jan 26 11:51:29 crc kubenswrapper[4619]: E0126 11:51:29.748722 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="extract-content" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.748738 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="extract-content" Jan 26 11:51:29 crc kubenswrapper[4619]: E0126 11:51:29.748757 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="extract-utilities" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.748766 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="extract-utilities" Jan 26 11:51:29 crc kubenswrapper[4619]: E0126 11:51:29.748786 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="registry-server" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.748796 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="registry-server" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.748983 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce94576f-5138-43af-8fbb-c5d22287bf35" containerName="registry-server" Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.749564 4619 util.go:30] "No sandbox for pod can be found. 
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.815936 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.816087 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbx65\" (UniqueName: \"kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.921872 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbx65\" (UniqueName: \"kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.922271 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.922517 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:29 crc kubenswrapper[4619]: I0126 11:51:29.949304 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbx65\" (UniqueName: \"kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65\") pod \"crc-debug-hx25k\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:30 crc kubenswrapper[4619]: I0126 11:51:30.066096 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-hx25k"
Jan 26 11:51:30 crc kubenswrapper[4619]: I0126 11:51:30.593860 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" event={"ID":"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9","Type":"ContainerStarted","Data":"cc7639e01980dbe52e9f2345b21a8f8cfa676cc1e00982fa6fd69e1066566ed4"}
Jan 26 11:51:41 crc kubenswrapper[4619]: I0126 11:51:41.714969 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" event={"ID":"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9","Type":"ContainerStarted","Data":"9533b297f10d12b67b6c836734389e94d9e0944250d751e3dbb1c54aa31a3f13"}
Jan 26 11:51:41 crc kubenswrapper[4619]: I0126 11:51:41.732938 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" podStartSLOduration=1.498626617 podStartE2EDuration="12.732917963s" podCreationTimestamp="2026-01-26 11:51:29 +0000 UTC" firstStartedPulling="2026-01-26 11:51:30.107095952 +0000 UTC m=+3389.141136688" lastFinishedPulling="2026-01-26 11:51:41.341387308 +0000 UTC m=+3400.375428034" observedRunningTime="2026-01-26 11:51:41.729690286 +0000 UTC m=+3400.763730992" watchObservedRunningTime="2026-01-26 11:51:41.732917963 +0000 UTC m=+3400.766958679"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.715787 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.720459 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.726138 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.889034 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qcw\" (UniqueName: \"kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.889401 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.889513 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.991847 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qcw\" (UniqueName: \"kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.992513 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.993179 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.993028 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:52 crc kubenswrapper[4619]: I0126 11:51:52.993526 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:53 crc kubenswrapper[4619]: I0126 11:51:53.018355 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qcw\" (UniqueName: \"kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw\") pod \"redhat-marketplace-rtxmp\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") " pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:53 crc kubenswrapper[4619]: I0126 11:51:53.045848 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:51:55 crc kubenswrapper[4619]: I0126 11:51:55.167655 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:51:55 crc kubenswrapper[4619]: I0126 11:51:55.837959 4619 generic.go:334] "Generic (PLEG): container finished" podID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerID="10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b" exitCode=0
Jan 26 11:51:55 crc kubenswrapper[4619]: I0126 11:51:55.838047 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerDied","Data":"10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b"}
Jan 26 11:51:55 crc kubenswrapper[4619]: I0126 11:51:55.838283 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerStarted","Data":"5b9556a9abc03abd0c11abc54fc1c98c1bf552fcf21ba536e3fe510aaef3b4ed"}
Jan 26 11:51:56 crc kubenswrapper[4619]: I0126 11:51:56.852447 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerStarted","Data":"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"}
Jan 26 11:51:57 crc kubenswrapper[4619]: I0126 11:51:57.863147 4619 generic.go:334] "Generic (PLEG): container finished" podID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerID="65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446" exitCode=0
Jan 26 11:51:57 crc kubenswrapper[4619]: I0126 11:51:57.863225 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerDied","Data":"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"}
Jan 26 11:51:58 crc kubenswrapper[4619]: I0126 11:51:58.874935 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerStarted","Data":"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"}
Jan 26 11:51:58 crc kubenswrapper[4619]: I0126 11:51:58.903737 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtxmp" podStartSLOduration=4.183513982 podStartE2EDuration="6.903717865s" podCreationTimestamp="2026-01-26 11:51:52 +0000 UTC" firstStartedPulling="2026-01-26 11:51:55.840664549 +0000 UTC m=+3414.874705265" lastFinishedPulling="2026-01-26 11:51:58.560868432 +0000 UTC m=+3417.594909148" observedRunningTime="2026-01-26 11:51:58.895652105 +0000 UTC m=+3417.929692841" watchObservedRunningTime="2026-01-26 11:51:58.903717865 +0000 UTC m=+3417.937758591"
Jan 26 11:52:03 crc kubenswrapper[4619]: I0126 11:52:03.046081 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:03 crc kubenswrapper[4619]: I0126 11:52:03.047408 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:03 crc kubenswrapper[4619]: I0126 11:52:03.105937 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:03 crc kubenswrapper[4619]: I0126 11:52:03.963809 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:04 crc kubenswrapper[4619]: I0126 11:52:04.012085 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:52:05 crc kubenswrapper[4619]: I0126 11:52:05.935023 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtxmp" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="registry-server" containerID="cri-o://777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8" gracePeriod=2
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.392867 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.540338 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities\") pod \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") "
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.540445 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content\") pod \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") "
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.540539 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qcw\" (UniqueName: \"kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw\") pod \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\" (UID: \"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9\") "
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.542343 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities" (OuterVolumeSpecName: "utilities") pod "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" (UID: "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.565421 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" (UID: "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.576253 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw" (OuterVolumeSpecName: "kube-api-access-g7qcw") pod "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" (UID: "08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9"). InnerVolumeSpecName "kube-api-access-g7qcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.643097 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.643140 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qcw\" (UniqueName: \"kubernetes.io/projected/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-kube-api-access-g7qcw\") on node \"crc\" DevicePath \"\""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.643153 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.949973 4619 generic.go:334] "Generic (PLEG): container finished" podID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerID="777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8" exitCode=0
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.950015 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerDied","Data":"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"}
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.950061 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtxmp" event={"ID":"08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9","Type":"ContainerDied","Data":"5b9556a9abc03abd0c11abc54fc1c98c1bf552fcf21ba536e3fe510aaef3b4ed"}
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.950081 4619 scope.go:117] "RemoveContainer" containerID="777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.950079 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtxmp"
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.990148 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:52:06 crc kubenswrapper[4619]: I0126 11:52:06.993807 4619 scope.go:117] "RemoveContainer" containerID="65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.008843 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtxmp"]
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.036898 4619 scope.go:117] "RemoveContainer" containerID="10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.084813 4619 scope.go:117] "RemoveContainer" containerID="777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"
Jan 26 11:52:07 crc kubenswrapper[4619]: E0126 11:52:07.090901 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8\": container with ID starting with 777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8 not found: ID does not exist" containerID="777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.090936 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8"} err="failed to get container status \"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8\": rpc error: code = NotFound desc = could not find container \"777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8\": container with ID starting with 777a5241d76a5ef63806e39c97083a15a1b6684866ea7515d03e2fd1b9ba1ef8 not found: ID does not exist"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.090959 4619 scope.go:117] "RemoveContainer" containerID="65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"
Jan 26 11:52:07 crc kubenswrapper[4619]: E0126 11:52:07.091338 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446\": container with ID starting with 65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446 not found: ID does not exist" containerID="65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.091365 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446"} err="failed to get container status \"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446\": rpc error: code = NotFound desc = could not find container \"65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446\": container with ID starting with 65398f8ff3c675582bd13f9648aafeb7ffa0351c31d3b9b79e274c045cef7446 not found: ID does not exist"
Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.091382 4619 scope.go:117] "RemoveContainer" containerID="10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b"
Jan 26 11:52:07 crc kubenswrapper[4619]: E0126 11:52:07.091721 4619 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b\": container with ID starting with 10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b not found: ID does not exist" containerID="10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b" Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.091784 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b"} err="failed to get container status \"10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b\": rpc error: code = NotFound desc = could not find container \"10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b\": container with ID starting with 10e641f67a48e5cf02270accee232f37f7cb8cbd161c344cd3120fd568ce5d4b not found: ID does not exist" Jan 26 11:52:07 crc kubenswrapper[4619]: I0126 11:52:07.273953 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" path="/var/lib/kubelet/pods/08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9/volumes" Jan 26 11:52:14 crc kubenswrapper[4619]: I0126 11:52:14.234769 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:52:14 crc kubenswrapper[4619]: I0126 11:52:14.235401 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:52:27 crc kubenswrapper[4619]: I0126 11:52:27.121042 4619 generic.go:334] "Generic (PLEG): container finished" podID="3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" containerID="9533b297f10d12b67b6c836734389e94d9e0944250d751e3dbb1c54aa31a3f13" exitCode=0 Jan 26 11:52:27 crc kubenswrapper[4619]: I0126 11:52:27.121174 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" event={"ID":"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9","Type":"ContainerDied","Data":"9533b297f10d12b67b6c836734389e94d9e0944250d751e3dbb1c54aa31a3f13"} Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.225016 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.260529 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-hx25k"] Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.267533 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-hx25k"] Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.342925 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbx65\" (UniqueName: \"kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65\") pod \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.343039 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host\") pod \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\" (UID: \"3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9\") " Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.343152 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host" (OuterVolumeSpecName: "host") pod "3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" (UID: "3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.343859 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.348659 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65" (OuterVolumeSpecName: "kube-api-access-nbx65") pod "3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" (UID: "3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9"). InnerVolumeSpecName "kube-api-access-nbx65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:52:28 crc kubenswrapper[4619]: I0126 11:52:28.445441 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbx65\" (UniqueName: \"kubernetes.io/projected/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9-kube-api-access-nbx65\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.141344 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7639e01980dbe52e9f2345b21a8f8cfa676cc1e00982fa6fd69e1066566ed4" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.141442 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-hx25k" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.276829 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" path="/var/lib/kubelet/pods/3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9/volumes" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.494121 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-5v7h6"] Jan 26 11:52:29 crc kubenswrapper[4619]: E0126 11:52:29.494924 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="registry-server" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495023 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="registry-server" Jan 26 11:52:29 crc kubenswrapper[4619]: E0126 11:52:29.495101 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="extract-utilities" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495159 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="extract-utilities" Jan 26 11:52:29 crc kubenswrapper[4619]: E0126 11:52:29.495227 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" containerName="container-00" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495284 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" containerName="container-00" Jan 26 11:52:29 crc kubenswrapper[4619]: E0126 11:52:29.495351 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="extract-content" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495410 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="extract-content" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495655 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5ee3a4-0b31-490f-b7a8-14139dbbf6e9" containerName="container-00" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.495744 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fa8ab1-c8dd-4238-a1a7-70aabe42d2c9" containerName="registry-server" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.496407 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.669848 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwgft\" (UniqueName: \"kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.670339 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.771848 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.772002 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwgft\" (UniqueName: \"kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.772018 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.795374 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwgft\" (UniqueName: \"kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft\") pod \"crc-debug-5v7h6\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:29 crc kubenswrapper[4619]: I0126 11:52:29.814049 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:30 crc kubenswrapper[4619]: I0126 11:52:30.155016 4619 generic.go:334] "Generic (PLEG): container finished" podID="0d112993-0bc6-4918-b8fa-eb83c61a6345" containerID="182d88792a865685755d03fd9b1c94ff5ed4f0775a4009bc6c77156f00c9873c" exitCode=0 Jan 26 11:52:30 crc kubenswrapper[4619]: I0126 11:52:30.155094 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" event={"ID":"0d112993-0bc6-4918-b8fa-eb83c61a6345","Type":"ContainerDied","Data":"182d88792a865685755d03fd9b1c94ff5ed4f0775a4009bc6c77156f00c9873c"} Jan 26 11:52:30 crc kubenswrapper[4619]: I0126 11:52:30.155409 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" event={"ID":"0d112993-0bc6-4918-b8fa-eb83c61a6345","Type":"ContainerStarted","Data":"0d86b1935f7878ec08fdd7e8bc0a108dd3a7f89a781bcd66d1e121ef2a890348"} Jan 26 11:52:30 crc kubenswrapper[4619]: I0126 11:52:30.594820 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-5v7h6"] Jan 26 11:52:30 crc kubenswrapper[4619]: I0126 11:52:30.603422 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-5v7h6"] Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.253220 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.301125 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwgft\" (UniqueName: \"kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft\") pod \"0d112993-0bc6-4918-b8fa-eb83c61a6345\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.301304 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host\") pod \"0d112993-0bc6-4918-b8fa-eb83c61a6345\" (UID: \"0d112993-0bc6-4918-b8fa-eb83c61a6345\") " Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.301674 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host" (OuterVolumeSpecName: "host") pod "0d112993-0bc6-4918-b8fa-eb83c61a6345" (UID: "0d112993-0bc6-4918-b8fa-eb83c61a6345"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.301922 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d112993-0bc6-4918-b8fa-eb83c61a6345-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.307205 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft" (OuterVolumeSpecName: "kube-api-access-zwgft") pod "0d112993-0bc6-4918-b8fa-eb83c61a6345" (UID: "0d112993-0bc6-4918-b8fa-eb83c61a6345"). InnerVolumeSpecName "kube-api-access-zwgft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.403738 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwgft\" (UniqueName: \"kubernetes.io/projected/0d112993-0bc6-4918-b8fa-eb83c61a6345-kube-api-access-zwgft\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.809436 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-dh5qb"] Jan 26 11:52:31 crc kubenswrapper[4619]: E0126 11:52:31.810030 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d112993-0bc6-4918-b8fa-eb83c61a6345" containerName="container-00" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.810053 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d112993-0bc6-4918-b8fa-eb83c61a6345" containerName="container-00" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.810302 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d112993-0bc6-4918-b8fa-eb83c61a6345" containerName="container-00" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.810972 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.913310 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:31 crc kubenswrapper[4619]: I0126 11:52:31.913663 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq5s4\" (UniqueName: \"kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.015467 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.015534 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq5s4\" (UniqueName: \"kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.015640 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.039407 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq5s4\" (UniqueName: \"kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4\") pod \"crc-debug-dh5qb\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " 
pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.132728 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.175577 4619 scope.go:117] "RemoveContainer" containerID="182d88792a865685755d03fd9b1c94ff5ed4f0775a4009bc6c77156f00c9873c" Jan 26 11:52:32 crc kubenswrapper[4619]: I0126 11:52:32.175677 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-5v7h6" Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.187551 4619 generic.go:334] "Generic (PLEG): container finished" podID="68d8d822-37a3-469c-8abf-aca170ac69c7" containerID="116ee0a728a576ec352ba58b35a870a5e98e47921f0217a611f810736366d397" exitCode=0 Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.188931 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" event={"ID":"68d8d822-37a3-469c-8abf-aca170ac69c7","Type":"ContainerDied","Data":"116ee0a728a576ec352ba58b35a870a5e98e47921f0217a611f810736366d397"} Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.189067 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" event={"ID":"68d8d822-37a3-469c-8abf-aca170ac69c7","Type":"ContainerStarted","Data":"25398cbc2ad0582f152bffa82360d4319f0f762aabb6506ea374d7267142d30b"} Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.232764 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-dh5qb"] Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.243333 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cv5tk/crc-debug-dh5qb"] Jan 26 11:52:33 crc kubenswrapper[4619]: I0126 11:52:33.282894 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d112993-0bc6-4918-b8fa-eb83c61a6345" path="/var/lib/kubelet/pods/0d112993-0bc6-4918-b8fa-eb83c61a6345/volumes" Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.292821 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.365070 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host\") pod \"68d8d822-37a3-469c-8abf-aca170ac69c7\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.365167 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq5s4\" (UniqueName: \"kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4\") pod \"68d8d822-37a3-469c-8abf-aca170ac69c7\" (UID: \"68d8d822-37a3-469c-8abf-aca170ac69c7\") " Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.366022 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host" (OuterVolumeSpecName: "host") pod "68d8d822-37a3-469c-8abf-aca170ac69c7" (UID: "68d8d822-37a3-469c-8abf-aca170ac69c7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.371576 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4" (OuterVolumeSpecName: "kube-api-access-dq5s4") pod "68d8d822-37a3-469c-8abf-aca170ac69c7" (UID: "68d8d822-37a3-469c-8abf-aca170ac69c7"). InnerVolumeSpecName "kube-api-access-dq5s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.466958 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68d8d822-37a3-469c-8abf-aca170ac69c7-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:34 crc kubenswrapper[4619]: I0126 11:52:34.466988 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq5s4\" (UniqueName: \"kubernetes.io/projected/68d8d822-37a3-469c-8abf-aca170ac69c7-kube-api-access-dq5s4\") on node \"crc\" DevicePath \"\"" Jan 26 11:52:35 crc kubenswrapper[4619]: I0126 11:52:35.210944 4619 scope.go:117] "RemoveContainer" containerID="116ee0a728a576ec352ba58b35a870a5e98e47921f0217a611f810736366d397" Jan 26 11:52:35 crc kubenswrapper[4619]: I0126 11:52:35.210998 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/crc-debug-dh5qb" Jan 26 11:52:35 crc kubenswrapper[4619]: I0126 11:52:35.285263 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d8d822-37a3-469c-8abf-aca170ac69c7" path="/var/lib/kubelet/pods/68d8d822-37a3-469c-8abf-aca170ac69c7/volumes" Jan 26 11:52:44 crc kubenswrapper[4619]: I0126 11:52:44.234733 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:52:44 crc kubenswrapper[4619]: I0126 11:52:44.235276 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:52:52 crc kubenswrapper[4619]: I0126 11:52:52.798914 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668f5b9c84-9qvth_16edc018-6152-42d9-aa2d-70de2c9851f3/barbican-api/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.077405 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5cd59c79cd-lqtz6_827c156d-633b-414a-93ef-07d73ba79785/barbican-keystone-listener/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.257133 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5cd59c79cd-lqtz6_827c156d-633b-414a-93ef-07d73ba79785/barbican-keystone-listener-log/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.476874 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668f5b9c84-9qvth_16edc018-6152-42d9-aa2d-70de2c9851f3/barbican-api-log/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.529007 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5b76d57d79-c2tm5_cd4a2072-c71c-42f6-940e-35435fc350c7/barbican-worker/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.627826 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b76d57d79-c2tm5_cd4a2072-c71c-42f6-940e-35435fc350c7/barbican-worker-log/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.712182 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr_12059c45-fc17-45cc-a061-a1b5ea704285/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.847248 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/ceilometer-central-agent/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.916710 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/ceilometer-notification-agent/0.log" Jan 26 11:52:53 crc kubenswrapper[4619]: I0126 11:52:53.940588 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/proxy-httpd/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.128033 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/sg-core/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.162873 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18a1159e-53c5-4f13-9b4d-c6912b11fe46/cinder-api-log/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.211775 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18a1159e-53c5-4f13-9b4d-c6912b11fe46/cinder-api/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.426894 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_519b14a3-af8d-4238-9bc0-69e13bae0a9e/cinder-scheduler/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.501411 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_519b14a3-af8d-4238-9bc0-69e13bae0a9e/probe/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.593238 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb_352a4117-3bba-4714-a367-916874cba86f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.792210 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-858f8_51653ef8-78e6-4e44-9391-e815c9d092bf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:54 crc kubenswrapper[4619]: I0126 11:52:54.855517 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/init/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.046530 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/init/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.078279 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/dnsmasq-dns/0.log" 
Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.175636 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l_ac31ccc2-07ae-4326-80dd-12b4e8393331/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.557339 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_27d84c05-55fb-4f3a-a363-aa137f111de7/glance-httpd/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.659905 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_27d84c05-55fb-4f3a-a363-aa137f111de7/glance-log/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.858514 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153/glance-log/0.log" Jan 26 11:52:55 crc kubenswrapper[4619]: I0126 11:52:55.898000 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153/glance-httpd/0.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.092032 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon/1.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.308042 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon/0.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.431584 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4_09e6155b-11d0-4cab-83c5-c215bac7c5d8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.509473 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon-log/0.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.665485 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c4v2v_e0c6e648-dad5-48ab-8eb3-0e40a9225e9e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:56 crc kubenswrapper[4619]: I0126 11:52:56.948444 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75ddf854f7-wtpq9_29fb1b8f-bf8f-456d-8e56-8fded6d074a1/keystone-api/0.log" Jan 26 11:52:57 crc kubenswrapper[4619]: I0126 11:52:57.153343 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8f4bc98f-79c3-4192-973d-32d8df967077/kube-state-metrics/0.log" Jan 26 11:52:57 crc kubenswrapper[4619]: I0126 11:52:57.302219 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h97ld_eaa2c414-823b-48a9-a59d-1f02d1708f9f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:57 crc kubenswrapper[4619]: I0126 11:52:57.705898 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54d868dd9-v7bwm_f5841244-b607-41b5-981c-1bb78b997411/neutron-httpd/0.log" Jan 26 11:52:57 crc kubenswrapper[4619]: I0126 11:52:57.750921 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54d868dd9-v7bwm_f5841244-b607-41b5-981c-1bb78b997411/neutron-api/0.log" Jan 26 
11:52:57 crc kubenswrapper[4619]: I0126 11:52:57.878208 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6_5f530175-ddda-4a1c-a437-af3747bb0da9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:58 crc kubenswrapper[4619]: I0126 11:52:58.269186 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a99ba972-f513-421c-b25d-c8ecbc095c0f/nova-api-log/0.log" Jan 26 11:52:58 crc kubenswrapper[4619]: I0126 11:52:58.447717 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_50392ffb-8c95-4c47-97e9-03d27141e8e8/nova-cell0-conductor-conductor/0.log" Jan 26 11:52:58 crc kubenswrapper[4619]: I0126 11:52:58.462043 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a99ba972-f513-421c-b25d-c8ecbc095c0f/nova-api-api/0.log" Jan 26 11:52:58 crc kubenswrapper[4619]: I0126 11:52:58.800437 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_65f97f5d-163a-469b-b63e-f2763404b64c/nova-cell1-conductor-conductor/0.log" Jan 26 11:52:58 crc kubenswrapper[4619]: I0126 11:52:58.895419 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2c34909b-2fd9-4e80-b0ef-9dbf87382ee7/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 11:52:59 crc kubenswrapper[4619]: I0126 11:52:59.427942 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xjjng_b641ed88-2b99-4794-a48d-906d2355417d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:52:59 crc kubenswrapper[4619]: I0126 11:52:59.471079 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5755883f-06f0-4bf0-888d-2742d71ddf6c/nova-metadata-log/0.log" Jan 26 11:52:59 crc kubenswrapper[4619]: I0126 11:52:59.767271 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a018ea11-c0b7-4523-b3f4-1367bb0073fd/nova-scheduler-scheduler/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.007368 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/mysql-bootstrap/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.164419 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/mysql-bootstrap/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.199046 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/galera/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.444901 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/mysql-bootstrap/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.586882 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/mysql-bootstrap/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.639949 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5755883f-06f0-4bf0-888d-2742d71ddf6c/nova-metadata-metadata/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.687958 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/galera/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.860188 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5a4db787-7749-4a67-a52a-b8c4f3229c65/openstackclient/0.log" Jan 26 11:53:00 crc kubenswrapper[4619]: I0126 11:53:00.998031 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-djjzm_b814fe04-5ad5-4a1f-b49b-9f38ea5be2da/ovn-controller/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.156299 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gbr5p_2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc/openstack-network-exporter/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.202927 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server-init/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.498106 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server-init/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.504746 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.570142 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovs-vswitchd/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.794948 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_189b0401-ae3e-44f3-bdcc-9991a88716e8/openstack-network-exporter/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.802963 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4dhp4_1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.864166 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_189b0401-ae3e-44f3-bdcc-9991a88716e8/ovn-northd/0.log" Jan 26 11:53:01 crc kubenswrapper[4619]: I0126 11:53:01.995798 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc19a957-aa75-443a-bd3a-2696241ffbd1/openstack-network-exporter/0.log" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.109146 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc19a957-aa75-443a-bd3a-2696241ffbd1/ovsdbserver-nb/0.log" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.263631 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:02 crc kubenswrapper[4619]: E0126 11:53:02.264019 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d8d822-37a3-469c-8abf-aca170ac69c7" containerName="container-00" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.264035 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d8d822-37a3-469c-8abf-aca170ac69c7" containerName="container-00" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.264229 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d8d822-37a3-469c-8abf-aca170ac69c7" containerName="container-00" Jan 26 
11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.266537 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.283303 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.367032 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.367186 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.367266 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs9z\" (UniqueName: \"kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.370488 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaabd9be-2386-41dc-88ef-944ee93da789/openstack-network-exporter/0.log" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.468651 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.468747 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.468803 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs9z\" (UniqueName: \"kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.469442 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.469681 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.508596 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs9z\" (UniqueName: \"kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z\") pod \"certified-operators-g5vzb\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.562511 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaabd9be-2386-41dc-88ef-944ee93da789/ovsdbserver-sb/0.log" Jan 26 11:53:02 crc kubenswrapper[4619]: I0126 11:53:02.600207 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.126922 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846c8954d4-fg4cj_e9d0bb40-3939-4f19-b3e8-f31e6bb0b381/placement-api/0.log" Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.373020 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.455975 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846c8954d4-fg4cj_e9d0bb40-3939-4f19-b3e8-f31e6bb0b381/placement-log/0.log" Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.501817 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerStarted","Data":"8897db449814d0bf9bb012cb12a64d29f18a9a5ce134e00a36b641feb9388550"} Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.573817 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/setup-container/0.log" Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.976215 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/setup-container/0.log" Jan 26 11:53:03 crc kubenswrapper[4619]: I0126 11:53:03.990082 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/rabbitmq/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.091593 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/setup-container/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.370538 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/setup-container/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.400248 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/rabbitmq/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.469909 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v_3098c4ac-7ae9-4af9-a23f-969054a718fe/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.510995 4619 generic.go:334] "Generic (PLEG): container finished" podID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerID="d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601" exitCode=0 Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.511034 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerDied","Data":"d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601"} Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.717589 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wjld2_99b4b151-e965-4c8b-9a4b-22b680ea1d69/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.745347 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr_af65cd01-ac28-4699-ae97-2fd8546a9925/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:04 crc kubenswrapper[4619]: I0126 11:53:04.950913 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xz9vt_52d1c976-907c-4749-a9d2-e4a518578cbc/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.124953 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qs6jh_7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3/ssh-known-hosts-edpm-deployment/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.429383 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659c4b6587-4stqp_faba43f0-103d-43e7-9f3f-ef5be7ee8fe1/proxy-server/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.471399 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659c4b6587-4stqp_faba43f0-103d-43e7-9f3f-ef5be7ee8fe1/proxy-httpd/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.527964 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerStarted","Data":"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1"} Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.564490 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9swh4_ae54b20c-f51c-4b68-9f71-0748e5ba0c32/swift-ring-rebalance/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.791900 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-auditor/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.800251 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-reaper/0.log" Jan 26 11:53:05 crc kubenswrapper[4619]: I0126 11:53:05.874791 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-replicator/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.053597 4619 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-server/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.060593 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-auditor/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.178022 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-server/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.178810 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-replicator/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.297249 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-updater/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.383487 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-auditor/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.432183 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-expirer/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.495715 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-replicator/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.545729 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-server/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.549244 4619 generic.go:334] "Generic (PLEG): container finished" podID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerID="f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1" exitCode=0 Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.549287 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerDied","Data":"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1"} Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.721258 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-updater/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.735846 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/rsync/0.log" Jan 26 11:53:06 crc kubenswrapper[4619]: I0126 11:53:06.827880 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/swift-recon-cron/0.log" Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.113426 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_857314c6-b4dd-4f76-8a06-2bf24b654fe3/tempest-tests-tempest-tests-runner/0.log" Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.130824 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq_c6a5b4c8-fd30-49e5-853a-6512124a63ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.259383 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_06dbbb97-7e72-4105-bc6c-275ca6b8c3ee/test-operator-logs-container/0.log" Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.470400 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qzt54_c99e3e59-22b4-4fe8-8fa6-69845f56ef45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.559580 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerStarted","Data":"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251"} Jan 26 11:53:07 crc kubenswrapper[4619]: I0126 11:53:07.580910 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g5vzb" podStartSLOduration=2.94714243 podStartE2EDuration="5.580896797s" podCreationTimestamp="2026-01-26 11:53:02 +0000 UTC" firstStartedPulling="2026-01-26 11:53:04.512461991 +0000 UTC m=+3483.546502707" lastFinishedPulling="2026-01-26 11:53:07.146216357 +0000 UTC m=+3486.180257074" observedRunningTime="2026-01-26 11:53:07.578581014 +0000 UTC m=+3486.612621730" watchObservedRunningTime="2026-01-26 11:53:07.580896797 +0000 UTC m=+3486.614937513" Jan 26 11:53:12 crc kubenswrapper[4619]: I0126 11:53:12.601970 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:12 crc kubenswrapper[4619]: I0126 11:53:12.602367 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:12 crc kubenswrapper[4619]: I0126 11:53:12.676743 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:13 crc kubenswrapper[4619]: I0126 11:53:13.659853 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:13 crc kubenswrapper[4619]: I0126 11:53:13.725172 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.149751 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8fdb3f80-9734-437b-94c1-6abcc8ce995f/memcached/0.log" Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.234552 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.234601 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.234668 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.235208 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.235266 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea" gracePeriod=600 Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.615454 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea" exitCode=0 Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.615530 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea"} Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.615821 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"} Jan 26 11:53:14 crc kubenswrapper[4619]: I0126 11:53:14.615842 4619 scope.go:117] "RemoveContainer" containerID="8c519af20a51c2b3a3b9506178b6ed9ff30e4fa81c88a2fdb04ea2b8508a2f9d" Jan 26 11:53:15 crc kubenswrapper[4619]: I0126 11:53:15.628282 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g5vzb" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="registry-server" containerID="cri-o://e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251" gracePeriod=2 Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.238719 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.390313 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs9z\" (UniqueName: \"kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z\") pod \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.390405 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities\") pod \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.390671 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content\") pod \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\" (UID: \"66845a3a-aec4-4f8a-ac93-94f0c83a6d72\") " Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.397520 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities" (OuterVolumeSpecName: "utilities") pod "66845a3a-aec4-4f8a-ac93-94f0c83a6d72" (UID: "66845a3a-aec4-4f8a-ac93-94f0c83a6d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.401520 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z" (OuterVolumeSpecName: "kube-api-access-8fs9z") pod "66845a3a-aec4-4f8a-ac93-94f0c83a6d72" (UID: "66845a3a-aec4-4f8a-ac93-94f0c83a6d72"). InnerVolumeSpecName "kube-api-access-8fs9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.482427 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66845a3a-aec4-4f8a-ac93-94f0c83a6d72" (UID: "66845a3a-aec4-4f8a-ac93-94f0c83a6d72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.492506 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.492543 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs9z\" (UniqueName: \"kubernetes.io/projected/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-kube-api-access-8fs9z\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.492559 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66845a3a-aec4-4f8a-ac93-94f0c83a6d72-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.716539 4619 generic.go:334] "Generic (PLEG): container finished" podID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerID="e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251" exitCode=0 Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.716835 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerDied","Data":"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251"} Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.716861 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5vzb" event={"ID":"66845a3a-aec4-4f8a-ac93-94f0c83a6d72","Type":"ContainerDied","Data":"8897db449814d0bf9bb012cb12a64d29f18a9a5ce134e00a36b641feb9388550"} Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.716877 4619 scope.go:117] "RemoveContainer" containerID="e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.716877 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g5vzb" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.788200 4619 scope.go:117] "RemoveContainer" containerID="f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.804292 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.849680 4619 scope.go:117] "RemoveContainer" containerID="d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.861518 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g5vzb"] Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.913644 4619 scope.go:117] "RemoveContainer" containerID="e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251" Jan 26 11:53:16 crc kubenswrapper[4619]: E0126 11:53:16.914131 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251\": container with ID starting with e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251 not found: ID does not exist" containerID="e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.914186 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251"} err="failed to get container status \"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251\": rpc error: code = NotFound desc = could not find container \"e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251\": container with ID starting with e89a7d33c369f2a518dc5cc94ae18496ef76c3ec43f8dab3e267e9ed6bf5f251 not found: ID does not exist" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.914219 4619 scope.go:117] "RemoveContainer" containerID="f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1" Jan 26 11:53:16 crc kubenswrapper[4619]: E0126 11:53:16.914535 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1\": container with ID starting with f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1 not found: ID does not exist" containerID="f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.914565 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1"} err="failed to get container status \"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1\": rpc error: code = NotFound desc = could not find container \"f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1\": container with ID starting with f9b6a3ed285d96f10fccae54c3fe9d4f57554593f7ebea880416f0a4283dafd1 not found: ID does not exist" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.914582 4619 scope.go:117] "RemoveContainer" containerID="d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601" Jan 26 11:53:16 crc kubenswrapper[4619]: E0126 11:53:16.914921 4619 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601\": container with ID starting with d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601 not found: ID does not exist" containerID="d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601" Jan 26 11:53:16 crc kubenswrapper[4619]: I0126 11:53:16.914958 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601"} err="failed to get container status \"d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601\": rpc error: code = NotFound desc = could not find container \"d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601\": container with ID starting with d8a028b796fd4b76a38cab583b29ffe770d3c36acee81557c75ba36ac9900601 not found: ID does not exist" Jan 26 11:53:16 crc kubenswrapper[4619]: E0126 11:53:16.939155 4619 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66845a3a_aec4_4f8a_ac93_94f0c83a6d72.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66845a3a_aec4_4f8a_ac93_94f0c83a6d72.slice/crio-8897db449814d0bf9bb012cb12a64d29f18a9a5ce134e00a36b641feb9388550\": RecentStats: unable to find data in memory cache]" Jan 26 11:53:17 crc kubenswrapper[4619]: I0126 11:53:17.271501 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" path="/var/lib/kubelet/pods/66845a3a-aec4-4f8a-ac93-94f0c83a6d72/volumes" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.257016 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:32 crc kubenswrapper[4619]: E0126 11:53:32.258998 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="extract-utilities" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.259011 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="extract-utilities" Jan 26 11:53:32 crc kubenswrapper[4619]: E0126 11:53:32.259027 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="registry-server" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.259033 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="registry-server" Jan 26 11:53:32 crc kubenswrapper[4619]: E0126 11:53:32.259045 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="extract-content" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.259051 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="extract-content" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.259230 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="66845a3a-aec4-4f8a-ac93-94f0c83a6d72" containerName="registry-server" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.262512 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.324086 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.397427 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtfxb\" (UniqueName: \"kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.397519 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.397555 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.499293 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtfxb\" (UniqueName: \"kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.499366 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.499397 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.499881 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.499919 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.531504 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gtfxb\" (UniqueName: \"kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb\") pod \"redhat-operators-q7dx4\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.619703 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:32 crc kubenswrapper[4619]: I0126 11:53:32.931452 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:32 crc kubenswrapper[4619]: W0126 11:53:32.956953 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3e1388_0660_42cb_992b_32de4e5d52d5.slice/crio-408bc25b6808101c4b929cac6c47385838aefbb3488967dc6ebd33fa1f0847a9 WatchSource:0}: Error finding container 408bc25b6808101c4b929cac6c47385838aefbb3488967dc6ebd33fa1f0847a9: Status 404 returned error can't find the container with id 408bc25b6808101c4b929cac6c47385838aefbb3488967dc6ebd33fa1f0847a9 Jan 26 11:53:33 crc kubenswrapper[4619]: I0126 11:53:33.862136 4619 generic.go:334] "Generic (PLEG): container finished" podID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerID="41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491" exitCode=0 Jan 26 11:53:33 crc kubenswrapper[4619]: I0126 11:53:33.862223 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerDied","Data":"41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491"} Jan 26 11:53:33 crc kubenswrapper[4619]: I0126 11:53:33.862430 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerStarted","Data":"408bc25b6808101c4b929cac6c47385838aefbb3488967dc6ebd33fa1f0847a9"} Jan 26 11:53:34 crc kubenswrapper[4619]: I0126 11:53:34.873768 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerStarted","Data":"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2"} Jan 26 11:53:38 crc kubenswrapper[4619]: I0126 11:53:38.910594 4619 generic.go:334] "Generic (PLEG): container finished" podID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerID="e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2" exitCode=0 Jan 26 11:53:38 crc kubenswrapper[4619]: I0126 11:53:38.910655 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerDied","Data":"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2"} Jan 26 11:53:39 crc kubenswrapper[4619]: I0126 11:53:39.652212 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log" Jan 26 11:53:39 crc kubenswrapper[4619]: I0126 11:53:39.922485 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerStarted","Data":"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd"} Jan 26 11:53:39 crc 
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.215889 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.305143 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.332199 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.538428 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.545200 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.670571 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/extract/0.log"
Jan 26 11:53:40 crc kubenswrapper[4619]: I0126 11:53:40.979261 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5d6449f6dc-sd74p_78e0a81b-7050-4a6b-8f89-b1f02cf2bed4/manager/0.log"
Jan 26 11:53:41 crc kubenswrapper[4619]: I0126 11:53:41.037568 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6w9xz_0236e799-d5fb-4edf-b0cf-b40093e13c9f/manager/0.log"
Jan 26 11:53:41 crc kubenswrapper[4619]: I0126 11:53:41.187008 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-qvfcm_c4c33d5c-a111-42bd-932d-7b60aaa798be/manager/0.log"
Jan 26 11:53:41 crc kubenswrapper[4619]: I0126 11:53:41.352131 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-x95m2_0821bfee-e661-4cb0-9079-70ee60bdec02/manager/0.log"
Jan 26 11:53:41 crc kubenswrapper[4619]: I0126 11:53:41.571184 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2wgql_67d01b92-a260-4a23-a395-1e2c5079dbed/manager/0.log"
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g6t9k_3ada408d-b7d5-4d35-b779-65be4855e174/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.080660 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-758868c854-h44rl_817a0b42-6961-46cf-b353-38aee1dab88c/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.113672 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-59hn2_d75eb578-095c-4ad4-b85d-c78417306fb0/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.313759 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-pwdwf_9bd38ee3-e401-40e3-8fdc-73722e175d2f/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.376302 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-ltc6c_097a933b-c278-4367-881a-bbd0942d69b3/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.622694 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.622894 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.623228 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj_146ce69f-077f-483b-a7f6-d32bb6e2ad05/manager/0.log" Jan 26 11:53:42 crc kubenswrapper[4619]: I0126 11:53:42.745410 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c8mj6_3edab216-d77f-4b95-b98b-0ed86e9b2305/manager/0.log" Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.017218 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fzrxh_d991f0cd-a82d-443e-b399-ab59ac238b0b/manager/0.log" Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.128454 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-vvxv5_8d9312b1-e850-4099-b5a4-60c113f009a3/manager/0.log" Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.290533 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854nj84s_366e3862-4a5d-447e-890e-1a1ed1d7bf5f/manager/0.log" Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.526166 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b888df747-blvm9_dd1e6c3c-64b1-4ced-9371-c2368efd4620/operator/0.log" Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.716197 4619 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7dx4" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="registry-server" probeResult="failure" output=< Jan 26 11:53:43 crc kubenswrapper[4619]: timeout: failed to connect service ":50051" within 1s Jan 26 11:53:43 crc kubenswrapper[4619]: > Jan 26 11:53:43 crc kubenswrapper[4619]: I0126 11:53:43.979976 4619 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xm4c9_c89269ce-7325-4368-8653-48d35a50ee0b/registry-server/0.log" Jan 26 11:53:44 crc kubenswrapper[4619]: I0126 11:53:44.558579 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77bff5b64d-pzhsq_3b4348c7-3d25-4d2b-837e-5add3c85cd30/manager/0.log" Jan 26 11:53:44 crc kubenswrapper[4619]: I0126 11:53:44.655939 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-rpdwn_7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a/manager/0.log" Jan 26 11:53:44 crc kubenswrapper[4619]: I0126 11:53:44.687218 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-fdtcd_a6eb6ada-8607-4687-a235-e8c5f581e4b4/manager/0.log" Jan 26 11:53:44 crc kubenswrapper[4619]: I0126 11:53:44.943434 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-j8r8q_9f67fdeb-3415-4da5-a78e-66f6afad477f/operator/0.log" Jan 26 11:53:45 crc kubenswrapper[4619]: I0126 11:53:45.090956 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-jcstk_ad531f03-5ce8-475e-923a-15a9561e79d0/manager/0.log" Jan 26 11:53:45 crc kubenswrapper[4619]: I0126 11:53:45.374912 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-z44mm_861bc5f6-bbf8-4626-aed7-a015389630d2/manager/0.log" Jan 26 11:53:45 crc kubenswrapper[4619]: I0126 11:53:45.436092 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-gpm8h_f2d78077-e281-4b95-a576-892bf5eaea8d/manager/0.log" Jan 26 11:53:45 crc kubenswrapper[4619]: I0126 11:53:45.530816 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-r479p_1200fb20-58ac-4e2b-aa47-d8e3bb34578b/manager/0.log" Jan 26 11:53:52 crc kubenswrapper[4619]: I0126 11:53:52.679549 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:52 crc kubenswrapper[4619]: I0126 11:53:52.743456 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:52 crc kubenswrapper[4619]: I0126 11:53:52.915565 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.045807 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7dx4" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="registry-server" containerID="cri-o://4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd" gracePeriod=2 Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.720070 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.830864 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtfxb\" (UniqueName: \"kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb\") pod \"ea3e1388-0660-42cb-992b-32de4e5d52d5\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.831057 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities\") pod \"ea3e1388-0660-42cb-992b-32de4e5d52d5\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.831144 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content\") pod \"ea3e1388-0660-42cb-992b-32de4e5d52d5\" (UID: \"ea3e1388-0660-42cb-992b-32de4e5d52d5\") " Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.831619 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities" (OuterVolumeSpecName: "utilities") pod "ea3e1388-0660-42cb-992b-32de4e5d52d5" (UID: "ea3e1388-0660-42cb-992b-32de4e5d52d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.836786 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb" (OuterVolumeSpecName: "kube-api-access-gtfxb") pod "ea3e1388-0660-42cb-992b-32de4e5d52d5" (UID: "ea3e1388-0660-42cb-992b-32de4e5d52d5"). InnerVolumeSpecName "kube-api-access-gtfxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.933099 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtfxb\" (UniqueName: \"kubernetes.io/projected/ea3e1388-0660-42cb-992b-32de4e5d52d5-kube-api-access-gtfxb\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.933161 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:54 crc kubenswrapper[4619]: I0126 11:53:54.957074 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3e1388-0660-42cb-992b-32de4e5d52d5" (UID: "ea3e1388-0660-42cb-992b-32de4e5d52d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.035241 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3e1388-0660-42cb-992b-32de4e5d52d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.055212 4619 generic.go:334] "Generic (PLEG): container finished" podID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerID="4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd" exitCode=0 Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.055251 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7dx4" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.055255 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerDied","Data":"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd"} Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.055278 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7dx4" event={"ID":"ea3e1388-0660-42cb-992b-32de4e5d52d5","Type":"ContainerDied","Data":"408bc25b6808101c4b929cac6c47385838aefbb3488967dc6ebd33fa1f0847a9"} Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.055297 4619 scope.go:117] "RemoveContainer" containerID="4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.076177 4619 scope.go:117] "RemoveContainer" containerID="e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.090579 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.100371 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7dx4"] Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.144651 4619 scope.go:117] "RemoveContainer" containerID="41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.194639 4619 scope.go:117] "RemoveContainer" containerID="4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd" Jan 26 11:53:55 crc kubenswrapper[4619]: E0126 11:53:55.195087 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd\": container with ID starting with 4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd not found: ID does not exist" containerID="4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd" Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.195112 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd"} err="failed to get container status \"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd\": rpc error: code = NotFound desc = could not find container \"4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd\": container with ID starting with 4cd86e458a5ea6e05c22cde3ac81ce3d1b7f7620614b6d4f77e654704c7659dd not found: ID does not exist" Jan 26 11:53:55 crc 
Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.195134 4619 scope.go:117] "RemoveContainer" containerID="e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2"
Jan 26 11:53:55 crc kubenswrapper[4619]: E0126 11:53:55.195397 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2\": container with ID starting with e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2 not found: ID does not exist" containerID="e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2"
Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.195425 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2"} err="failed to get container status \"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2\": rpc error: code = NotFound desc = could not find container \"e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2\": container with ID starting with e1a40c0b46a9daeb68391200d5aa780ec6e687a3026c30b8c17999f12ad9e1a2 not found: ID does not exist"
Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.195447 4619 scope.go:117] "RemoveContainer" containerID="41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491"
Jan 26 11:53:55 crc kubenswrapper[4619]: E0126 11:53:55.195801 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491\": container with ID starting with 41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491 not found: ID does not exist" containerID="41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491"
Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.195837 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491"} err="failed to get container status \"41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491\": rpc error: code = NotFound desc = could not find container \"41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491\": container with ID starting with 41a99f652782bc9d4165b5ce64c9b79106eff87b4d2ec4f401b6aa8f39a90491 not found: ID does not exist"
Jan 26 11:53:55 crc kubenswrapper[4619]: I0126 11:53:55.275902 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" path="/var/lib/kubelet/pods/ea3e1388-0660-42cb-992b-32de4e5d52d5/volumes"
Jan 26 11:54:08 crc kubenswrapper[4619]: I0126 11:54:08.324400 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l54p9_696254c7-95ab-43d9-9919-5d1146eec08e/control-plane-machine-set-operator/0.log"
Jan 26 11:54:08 crc kubenswrapper[4619]: I0126 11:54:08.449422 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hk46v_0bff9a47-4685-457d-8a24-6139113cdbd8/kube-rbac-proxy/0.log"
Jan 26 11:54:08 crc kubenswrapper[4619]: I0126 11:54:08.563025 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hk46v_0bff9a47-4685-457d-8a24-6139113cdbd8/machine-api-operator/0.log"
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pzvld_b4ddb0da-8d36-41cf-a6f1-f02a48086888/cert-manager-controller/0.log" Jan 26 11:54:20 crc kubenswrapper[4619]: I0126 11:54:20.757943 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nnd7f_cb296f0e-c4e5-4b2b-82de-49af144cbf77/cert-manager-cainjector/0.log" Jan 26 11:54:20 crc kubenswrapper[4619]: I0126 11:54:20.867656 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-46rtj_e256cee0-a4f8-46ca-bad9-4abc6bf31216/cert-manager-webhook/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.447131 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-f7qjj_9ffa32f9-021a-405b-920e-5fb684f8d8e4/nmstate-console-plugin/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.610555 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9k8lb_b8830012-0a44-4256-b546-b00b81d136cf/nmstate-handler/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.685806 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nh758_b2d70641-12a7-4923-8fa0-f09a91915630/nmstate-metrics/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.696416 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nh758_b2d70641-12a7-4923-8fa0-f09a91915630/kube-rbac-proxy/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.892910 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cnsxv_a05d614f-24d2-4005-9110-1a002d0670ae/nmstate-operator/0.log" Jan 26 11:54:33 crc kubenswrapper[4619]: I0126 11:54:33.898819 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-56j7m_b2e020f6-4ac4-407d-9eb9-96f1072d01ab/nmstate-webhook/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.312062 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nz7bv_72d807ee-cc40-44fc-b153-c36c4bb75332/kube-rbac-proxy/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.493899 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nz7bv_72d807ee-cc40-44fc-b153-c36c4bb75332/controller/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.565275 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.814919 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.820720 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.829679 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 11:55:05 crc kubenswrapper[4619]: I0126 11:55:05.873206 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.031377 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.039399 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.088016 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.113971 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.295159 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.301724 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.335440 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.352128 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/controller/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.510156 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/frr-metrics/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.588335 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/kube-rbac-proxy-frr/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.599897 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/kube-rbac-proxy/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.820680 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/reloader/0.log" Jan 26 11:55:06 crc kubenswrapper[4619]: I0126 11:55:06.944932 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jwd5k_c511ad3e-52ab-4e39-bbed-f795da1b29e8/frr-k8s-webhook-server/0.log" Jan 26 11:55:07 crc kubenswrapper[4619]: I0126 11:55:07.216131 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-f5d78644f-q9qtr_4c81b5cf-0f17-4d7b-bfd8-ee67be620339/manager/0.log" Jan 26 11:55:07 crc kubenswrapper[4619]: I0126 11:55:07.349714 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7575bfd756-qfvd5_50eeef8d-b0ff-4b67-86ef-68febf4bcc0b/webhook-server/0.log" Jan 26 11:55:07 crc kubenswrapper[4619]: I0126 11:55:07.608740 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/frr/0.log" Jan 26 11:55:07 crc kubenswrapper[4619]: I0126 11:55:07.921080 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fdh28_a3fb0354-e5ca-4c6c-a008-44355d8dd331/kube-rbac-proxy/0.log" Jan 26 11:55:08 crc kubenswrapper[4619]: I0126 11:55:08.192166 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fdh28_a3fb0354-e5ca-4c6c-a008-44355d8dd331/speaker/0.log" Jan 26 11:55:14 crc kubenswrapper[4619]: I0126 11:55:14.235065 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:55:14 crc kubenswrapper[4619]: I0126 11:55:14.235654 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.290402 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.557355 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.608491 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.609727 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.806408 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.820366 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/extract/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.831116 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 11:55:20 crc kubenswrapper[4619]: I0126 11:55:20.961825 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.146827 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.148147 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.160204 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.331240 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/extract/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.340440 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.366542 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.513773 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.683558 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.685883 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.691216 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.942798 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 11:55:21 crc kubenswrapper[4619]: I0126 11:55:21.987587 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.196818 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.461948 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/registry-server/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.482230 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.492805 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.535220 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.755761 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 11:55:22 crc kubenswrapper[4619]: I0126 11:55:22.771859 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.051044 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9klzt_d5665bb8-e8d9-4970-a1d0-db862b679458/marketplace-operator/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.105922 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/registry-server/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.149499 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.311692 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.364475 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.416214 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.511019 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.514630 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.692206 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/registry-server/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.780524 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.980189 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 11:55:23 crc kubenswrapper[4619]: I0126 11:55:23.985368 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 11:55:24 crc kubenswrapper[4619]: I0126 11:55:24.005735 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 11:55:24 crc kubenswrapper[4619]: I0126 11:55:24.225042 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 11:55:24 crc kubenswrapper[4619]: I0126 11:55:24.251746 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 11:55:24 crc kubenswrapper[4619]: I0126 11:55:24.695269 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/registry-server/0.log" Jan 26 11:55:44 crc kubenswrapper[4619]: I0126 11:55:44.234183 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:55:44 crc kubenswrapper[4619]: I0126 11:55:44.235768 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:56:14 crc kubenswrapper[4619]: I0126 11:56:14.234833 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:56:14 crc kubenswrapper[4619]: I0126 11:56:14.235506 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:56:14 crc kubenswrapper[4619]: I0126 11:56:14.235547 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 11:56:14 crc kubenswrapper[4619]: I0126 11:56:14.236227 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:56:14 crc kubenswrapper[4619]: I0126 11:56:14.236269 4619 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" gracePeriod=600 Jan 26 11:56:14 crc kubenswrapper[4619]: E0126 11:56:14.371372 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:56:15 crc kubenswrapper[4619]: I0126 11:56:15.346479 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" exitCode=0 Jan 26 11:56:15 crc kubenswrapper[4619]: I0126 11:56:15.346572 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"} Jan 26 11:56:15 crc kubenswrapper[4619]: I0126 11:56:15.346909 4619 scope.go:117] "RemoveContainer" containerID="bc5d131803bc4df1a53c71187ed03cacec8fabe2fb107f3f19921da11278f5ea" Jan 26 11:56:15 crc kubenswrapper[4619]: I0126 11:56:15.347504 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:56:15 crc kubenswrapper[4619]: E0126 11:56:15.347881 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:56:27 crc kubenswrapper[4619]: I0126 11:56:27.261701 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:56:27 crc kubenswrapper[4619]: E0126 11:56:27.262555 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:56:39 crc kubenswrapper[4619]: I0126 11:56:39.261224 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:56:39 crc kubenswrapper[4619]: E0126 11:56:39.262006 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 
11:56:52 crc kubenswrapper[4619]: I0126 11:56:52.262831 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:56:52 crc kubenswrapper[4619]: E0126 11:56:52.263721 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:57:05 crc kubenswrapper[4619]: I0126 11:57:05.264474 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:57:05 crc kubenswrapper[4619]: E0126 11:57:05.265358 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:57:18 crc kubenswrapper[4619]: I0126 11:57:18.261745 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:57:18 crc kubenswrapper[4619]: E0126 11:57:18.262550 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:57:23 crc kubenswrapper[4619]: I0126 11:57:23.939240 4619 generic.go:334] "Generic (PLEG): container finished" podID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerID="aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a" exitCode=0 Jan 26 11:57:23 crc kubenswrapper[4619]: I0126 11:57:23.939318 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" event={"ID":"56faf884-e166-465d-89d7-f5e3c60acd5e","Type":"ContainerDied","Data":"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"} Jan 26 11:57:23 crc kubenswrapper[4619]: I0126 11:57:23.940280 4619 scope.go:117] "RemoveContainer" containerID="aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a" Jan 26 11:57:24 crc kubenswrapper[4619]: I0126 11:57:24.219254 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cv5tk_must-gather-s8kkt_56faf884-e166-465d-89d7-f5e3c60acd5e/gather/0.log" Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.499172 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cv5tk/must-gather-s8kkt"] Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.499801 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cv5tk/must-gather-s8kkt" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="copy" containerID="cri-o://3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10" gracePeriod=2 Jan 26 11:57:32 crc 
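[editor's note] Every "back-off 5m0s" entry above is the kubelet's CrashLoopBackOff restart delay sitting at its cap. A sketch of that schedule, assuming the documented defaults of a 10s initial delay that doubles per restart (the 5m cap is the value visible in the log):

```go
// backoff_sketch.go - sketch of the CrashLoopBackOff delay schedule implied
// by the "back-off 5m0s" messages above; the 10s initial delay and doubling
// are the documented kubelet defaults, assumed here rather than read from
// this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // kubelet default (assumption)
		maxDelay     = 5 * time.Minute  // the "back-off 5m0s" cap in the log
	)
	delay := initialDelay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // from here on, every retry logs "back-off 5m0s"
		}
	}
}
```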
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.511029 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cv5tk/must-gather-s8kkt"]
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.902095 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cv5tk_must-gather-s8kkt_56faf884-e166-465d-89d7-f5e3c60acd5e/copy/0.log"
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.902839 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.956025 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92b9j\" (UniqueName: \"kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j\") pod \"56faf884-e166-465d-89d7-f5e3c60acd5e\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") "
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.956086 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output\") pod \"56faf884-e166-465d-89d7-f5e3c60acd5e\" (UID: \"56faf884-e166-465d-89d7-f5e3c60acd5e\") "
Jan 26 11:57:32 crc kubenswrapper[4619]: I0126 11:57:32.962243 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j" (OuterVolumeSpecName: "kube-api-access-92b9j") pod "56faf884-e166-465d-89d7-f5e3c60acd5e" (UID: "56faf884-e166-465d-89d7-f5e3c60acd5e"). InnerVolumeSpecName "kube-api-access-92b9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.017916 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cv5tk_must-gather-s8kkt_56faf884-e166-465d-89d7-f5e3c60acd5e/copy/0.log"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.018225 4619 generic.go:334] "Generic (PLEG): container finished" podID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerID="3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10" exitCode=143
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.018269 4619 scope.go:117] "RemoveContainer" containerID="3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.018382 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cv5tk/must-gather-s8kkt"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.044683 4619 scope.go:117] "RemoveContainer" containerID="aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.058806 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92b9j\" (UniqueName: \"kubernetes.io/projected/56faf884-e166-465d-89d7-f5e3c60acd5e-kube-api-access-92b9j\") on node \"crc\" DevicePath \"\""
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.087351 4619 scope.go:117] "RemoveContainer" containerID="3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10"
Jan 26 11:57:33 crc kubenswrapper[4619]: E0126 11:57:33.089808 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10\": container with ID starting with 3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10 not found: ID does not exist" containerID="3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.089853 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10"} err="failed to get container status \"3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10\": rpc error: code = NotFound desc = could not find container \"3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10\": container with ID starting with 3cac621e4c0cb37dd0fd2e47f067f6a1aaf8cb472349fc52f28711f6bb752c10 not found: ID does not exist"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.089879 4619 scope.go:117] "RemoveContainer" containerID="aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"
Jan 26 11:57:33 crc kubenswrapper[4619]: E0126 11:57:33.090180 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a\": container with ID starting with aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a not found: ID does not exist" containerID="aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.090204 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a"} err="failed to get container status \"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a\": rpc error: code = NotFound desc = could not find container \"aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a\": container with ID starting with aad02af9f1a3838b557b6dc08253644da584fcc8f1422871f2d7a08cd9f2ee5a not found: ID does not exist"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.107663 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56faf884-e166-465d-89d7-f5e3c60acd5e" (UID: "56faf884-e166-465d-89d7-f5e3c60acd5e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.160174 4619 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56faf884-e166-465d-89d7-f5e3c60acd5e-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.261877 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:57:33 crc kubenswrapper[4619]: E0126 11:57:33.262364 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:57:33 crc kubenswrapper[4619]: I0126 11:57:33.305317 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" path="/var/lib/kubelet/pods/56faf884-e166-465d-89d7-f5e3c60acd5e/volumes"
Jan 26 11:57:44 crc kubenswrapper[4619]: I0126 11:57:44.260837 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:57:44 crc kubenswrapper[4619]: E0126 11:57:44.261526 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:57:56 crc kubenswrapper[4619]: I0126 11:57:56.261572 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:57:56 crc kubenswrapper[4619]: E0126 11:57:56.263708 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:58:09 crc kubenswrapper[4619]: I0126 11:58:09.261763 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:58:09 crc kubenswrapper[4619]: E0126 11:58:09.262960 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:58:21 crc kubenswrapper[4619]: I0126 11:58:21.268187 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:58:21 crc kubenswrapper[4619]: E0126 11:58:21.268925 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:58:32 crc kubenswrapper[4619]: I0126 11:58:32.261258 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:58:32 crc kubenswrapper[4619]: E0126 11:58:32.261955 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:58:33 crc kubenswrapper[4619]: I0126 11:58:33.573516 4619 scope.go:117] "RemoveContainer" containerID="9533b297f10d12b67b6c836734389e94d9e0944250d751e3dbb1c54aa31a3f13"
Jan 26 11:58:46 crc kubenswrapper[4619]: I0126 11:58:46.261907 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:58:46 crc kubenswrapper[4619]: E0126 11:58:46.262837 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:58:59 crc kubenswrapper[4619]: I0126 11:58:59.263115 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:58:59 crc kubenswrapper[4619]: E0126 11:58:59.265369 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:59:13 crc kubenswrapper[4619]: I0126 11:59:13.261315 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
Jan 26 11:59:13 crc kubenswrapper[4619]: E0126 11:59:13.261915 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832"
Jan 26 11:59:28 crc kubenswrapper[4619]: I0126 11:59:28.261847 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:59:42 crc kubenswrapper[4619]: I0126 11:59:42.262376 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:59:42 crc kubenswrapper[4619]: E0126 11:59:42.263182 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 11:59:55 crc kubenswrapper[4619]: I0126 11:59:55.260989 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 11:59:55 crc kubenswrapper[4619]: E0126 11:59:55.261888 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.221609 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s"] Jan 26 12:00:00 crc kubenswrapper[4619]: E0126 12:00:00.222432 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="registry-server" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222444 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="registry-server" Jan 26 12:00:00 crc kubenswrapper[4619]: E0126 12:00:00.222464 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="gather" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222471 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="gather" Jan 26 12:00:00 crc kubenswrapper[4619]: E0126 12:00:00.222486 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="copy" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222492 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="copy" Jan 26 12:00:00 crc kubenswrapper[4619]: E0126 12:00:00.222516 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="extract-content" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222521 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="extract-content" Jan 26 12:00:00 crc kubenswrapper[4619]: E0126 12:00:00.222532 4619 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="extract-utilities" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222538 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="extract-utilities" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222712 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3e1388-0660-42cb-992b-32de4e5d52d5" containerName="registry-server" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222725 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="copy" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.222753 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="56faf884-e166-465d-89d7-f5e3c60acd5e" containerName="gather" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.223376 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.232171 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.232798 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.239541 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s"] Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.324911 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.325012 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmm5\" (UniqueName: \"kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.325052 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.427558 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmm5\" (UniqueName: \"kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.427673 4619 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.428494 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.429742 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.434325 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.445861 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmm5\" (UniqueName: \"kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5\") pod \"collect-profiles-29490480-x6n5s\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.547194 4619 util.go:30] "No sandbox for pod can be found. 
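[editor's note] The numeric suffix in collect-profiles-29490480-x6n5s is the Job's scheduled time in minutes since the Unix epoch, which is how the CronJob controller names the Jobs it creates. Decoding it recovers the 12:00:00 schedule point of the SyncLoop ADD above — a sketch, with the suffix value taken from the log:

```go
// jobname_sketch.go - decode the scheduled-time suffix of a CronJob-created
// Job name; the suffix is minutes since the Unix epoch.
package main

import (
	"fmt"
	"time"
)

func main() {
	const suffixMinutes = 29490480 // from collect-profiles-29490480-x6n5s
	scheduled := time.Unix(suffixMinutes*60, 0).UTC()
	fmt.Println(scheduled) // 2026-01-26 12:00:00 +0000 UTC
}
```

The keystone-cron-29490481-qvpcl pod later in this log fits the same pattern: 29490481 minutes decodes to 12:01:00, matching its SyncLoop ADD timestamp.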
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:00 crc kubenswrapper[4619]: I0126 12:00:00.987228 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s"] Jan 26 12:00:01 crc kubenswrapper[4619]: I0126 12:00:01.353169 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" event={"ID":"8aa3636b-dbd7-49c1-818b-46c7740e1b17","Type":"ContainerStarted","Data":"cc19b27c7afedd184b59db06d0648942febc22b98ef650a6bfbb4f4cca7bbe8a"} Jan 26 12:00:01 crc kubenswrapper[4619]: I0126 12:00:01.353697 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" event={"ID":"8aa3636b-dbd7-49c1-818b-46c7740e1b17","Type":"ContainerStarted","Data":"d53de92595569838bcc3cf2e4f4b3297ad02e2fd761966ba4fd9711fe89e1647"} Jan 26 12:00:01 crc kubenswrapper[4619]: I0126 12:00:01.381561 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" podStartSLOduration=1.381541396 podStartE2EDuration="1.381541396s" podCreationTimestamp="2026-01-26 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:00:01.373136799 +0000 UTC m=+3900.407177535" watchObservedRunningTime="2026-01-26 12:00:01.381541396 +0000 UTC m=+3900.415582112" Jan 26 12:00:02 crc kubenswrapper[4619]: I0126 12:00:02.396548 4619 generic.go:334] "Generic (PLEG): container finished" podID="8aa3636b-dbd7-49c1-818b-46c7740e1b17" containerID="cc19b27c7afedd184b59db06d0648942febc22b98ef650a6bfbb4f4cca7bbe8a" exitCode=0 Jan 26 12:00:02 crc kubenswrapper[4619]: I0126 12:00:02.396633 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" event={"ID":"8aa3636b-dbd7-49c1-818b-46c7740e1b17","Type":"ContainerDied","Data":"cc19b27c7afedd184b59db06d0648942febc22b98ef650a6bfbb4f4cca7bbe8a"} Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.768154 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.930723 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume\") pod \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.931218 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume" (OuterVolumeSpecName: "config-volume") pod "8aa3636b-dbd7-49c1-818b-46c7740e1b17" (UID: "8aa3636b-dbd7-49c1-818b-46c7740e1b17"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.930790 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlmm5\" (UniqueName: \"kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5\") pod \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.932111 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume\") pod \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\" (UID: \"8aa3636b-dbd7-49c1-818b-46c7740e1b17\") " Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.932789 4619 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aa3636b-dbd7-49c1-818b-46c7740e1b17-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.956207 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5" (OuterVolumeSpecName: "kube-api-access-qlmm5") pod "8aa3636b-dbd7-49c1-818b-46c7740e1b17" (UID: "8aa3636b-dbd7-49c1-818b-46c7740e1b17"). InnerVolumeSpecName "kube-api-access-qlmm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:00:03 crc kubenswrapper[4619]: I0126 12:00:03.966051 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8aa3636b-dbd7-49c1-818b-46c7740e1b17" (UID: "8aa3636b-dbd7-49c1-818b-46c7740e1b17"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.034641 4619 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8aa3636b-dbd7-49c1-818b-46c7740e1b17-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.034687 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlmm5\" (UniqueName: \"kubernetes.io/projected/8aa3636b-dbd7-49c1-818b-46c7740e1b17-kube-api-access-qlmm5\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.421074 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" event={"ID":"8aa3636b-dbd7-49c1-818b-46c7740e1b17","Type":"ContainerDied","Data":"d53de92595569838bcc3cf2e4f4b3297ad02e2fd761966ba4fd9711fe89e1647"} Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.421109 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53de92595569838bcc3cf2e4f4b3297ad02e2fd761966ba4fd9711fe89e1647" Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.421132 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-x6n5s" Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.450847 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd"] Jan 26 12:00:04 crc kubenswrapper[4619]: I0126 12:00:04.463886 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-dgpsd"] Jan 26 12:00:05 crc kubenswrapper[4619]: I0126 12:00:05.272575 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f339338-c587-4b52-98f1-44b46fab9b40" path="/var/lib/kubelet/pods/0f339338-c587-4b52-98f1-44b46fab9b40/volumes" Jan 26 12:00:09 crc kubenswrapper[4619]: I0126 12:00:09.261751 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:00:09 crc kubenswrapper[4619]: E0126 12:00:09.263146 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:00:21 crc kubenswrapper[4619]: I0126 12:00:21.270523 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:00:21 crc kubenswrapper[4619]: E0126 12:00:21.271427 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.025469 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w267c/must-gather-lpnxr"] Jan 26 12:00:30 crc kubenswrapper[4619]: E0126 12:00:30.027419 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa3636b-dbd7-49c1-818b-46c7740e1b17" containerName="collect-profiles" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.027496 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa3636b-dbd7-49c1-818b-46c7740e1b17" containerName="collect-profiles" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.027730 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa3636b-dbd7-49c1-818b-46c7740e1b17" containerName="collect-profiles" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.028893 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.031133 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w267c"/"openshift-service-ca.crt" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.031202 4619 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w267c"/"kube-root-ca.crt" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.043759 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w267c/must-gather-lpnxr"] Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.044545 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.044750 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fwh\" (UniqueName: \"kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.152325 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fwh\" (UniqueName: \"kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.152452 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.152876 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.174287 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fwh\" (UniqueName: \"kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh\") pod \"must-gather-lpnxr\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:30 crc kubenswrapper[4619]: I0126 12:00:30.349577 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:00:31 crc kubenswrapper[4619]: I0126 12:00:31.027558 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w267c/must-gather-lpnxr"] Jan 26 12:00:31 crc kubenswrapper[4619]: I0126 12:00:31.650875 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/must-gather-lpnxr" event={"ID":"2544c704-5423-4b36-b2e2-f98cf6306451","Type":"ContainerStarted","Data":"c06904f48e7b7122625f9272ee276e597a611ad12e4cecd2c85c1b2c05d4aaca"} Jan 26 12:00:31 crc kubenswrapper[4619]: I0126 12:00:31.651163 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/must-gather-lpnxr" event={"ID":"2544c704-5423-4b36-b2e2-f98cf6306451","Type":"ContainerStarted","Data":"f9018ffa8c63d71479052d1883c2784b2bb4e3f2a56544b787f682a7d29af88f"} Jan 26 12:00:32 crc kubenswrapper[4619]: I0126 12:00:32.664684 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/must-gather-lpnxr" event={"ID":"2544c704-5423-4b36-b2e2-f98cf6306451","Type":"ContainerStarted","Data":"59adf29c735904c64085d15c171cbf6e49b9e3a770671dbdf261c8433e13e56d"} Jan 26 12:00:32 crc kubenswrapper[4619]: I0126 12:00:32.684705 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w267c/must-gather-lpnxr" podStartSLOduration=3.684684137 podStartE2EDuration="3.684684137s" podCreationTimestamp="2026-01-26 12:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:00:32.684490492 +0000 UTC m=+3931.718531208" watchObservedRunningTime="2026-01-26 12:00:32.684684137 +0000 UTC m=+3931.718724853" Jan 26 12:00:33 crc kubenswrapper[4619]: I0126 12:00:33.643035 4619 scope.go:117] "RemoveContainer" containerID="ada707d0a1cfec1326004e3932d9ede1e233ce7e2e505b9477b1c8a72bebef90" Jan 26 12:00:33 crc kubenswrapper[4619]: E0126 12:00:33.708201 4619 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:41668->38.102.83.69:40211: write tcp 38.102.83.69:41668->38.102.83.69:40211: write: broken pipe Jan 26 12:00:35 crc kubenswrapper[4619]: I0126 12:00:35.896561 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w267c/crc-debug-m7q7h"] Jan 26 12:00:35 crc kubenswrapper[4619]: I0126 12:00:35.898312 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:35 crc kubenswrapper[4619]: I0126 12:00:35.901247 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w267c"/"default-dockercfg-zkm9s" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.073691 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpq6x\" (UniqueName: \"kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.074075 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.175742 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.175844 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpq6x\" (UniqueName: \"kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.175937 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.205792 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpq6x\" (UniqueName: \"kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x\") pod \"crc-debug-m7q7h\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.232269 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.261608 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:00:36 crc kubenswrapper[4619]: E0126 12:00:36.262146 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:00:36 crc kubenswrapper[4619]: W0126 12:00:36.271876 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48e00634_b1e2_4c5e_b067_beabe362c617.slice/crio-931532b20395d6968c5228e4912e42b134c87f9f88b6b42741545d5969633b3c WatchSource:0}: Error finding container 931532b20395d6968c5228e4912e42b134c87f9f88b6b42741545d5969633b3c: Status 404 returned error can't find the container with id 931532b20395d6968c5228e4912e42b134c87f9f88b6b42741545d5969633b3c Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.719403 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-m7q7h" event={"ID":"48e00634-b1e2-4c5e-b067-beabe362c617","Type":"ContainerStarted","Data":"45c6659d9aa4f663e09f560aee5df62456cdc29051cb4e8ee5e8bdaeb76bfb03"} Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.720053 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-m7q7h" event={"ID":"48e00634-b1e2-4c5e-b067-beabe362c617","Type":"ContainerStarted","Data":"931532b20395d6968c5228e4912e42b134c87f9f88b6b42741545d5969633b3c"} Jan 26 12:00:36 crc kubenswrapper[4619]: I0126 12:00:36.740987 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w267c/crc-debug-m7q7h" podStartSLOduration=1.7409643030000002 podStartE2EDuration="1.740964303s" podCreationTimestamp="2026-01-26 12:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:00:36.732357671 +0000 UTC m=+3935.766398387" watchObservedRunningTime="2026-01-26 12:00:36.740964303 +0000 UTC m=+3935.775005019" Jan 26 12:00:51 crc kubenswrapper[4619]: I0126 12:00:51.266476 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:00:51 crc kubenswrapper[4619]: E0126 12:00:51.269892 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.167263 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490481-qvpcl"] Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.169810 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.177473 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490481-qvpcl"] Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.357813 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.357936 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.357971 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.358018 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg57f\" (UniqueName: \"kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.459744 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.459825 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg57f\" (UniqueName: \"kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.459868 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.459961 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.651596 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.651852 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.653458 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg57f\" (UniqueName: \"kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.653758 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle\") pod \"keystone-cron-29490481-qvpcl\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:00 crc kubenswrapper[4619]: I0126 12:01:00.806000 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:01 crc kubenswrapper[4619]: I0126 12:01:01.389802 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490481-qvpcl"] Jan 26 12:01:01 crc kubenswrapper[4619]: I0126 12:01:01.928214 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-qvpcl" event={"ID":"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00","Type":"ContainerStarted","Data":"6fe90fb9dfe93e3ff994735f423248b6099f756c96fd2bb5e748312fec6aff63"} Jan 26 12:01:01 crc kubenswrapper[4619]: I0126 12:01:01.928535 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-qvpcl" event={"ID":"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00","Type":"ContainerStarted","Data":"1db1cd61e471b0fd13aade17f7a075e5fa4f6ced574b96037974b0010fdab1a6"} Jan 26 12:01:01 crc kubenswrapper[4619]: I0126 12:01:01.950368 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490481-qvpcl" podStartSLOduration=1.950346116 podStartE2EDuration="1.950346116s" podCreationTimestamp="2026-01-26 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:01:01.945349911 +0000 UTC m=+3960.979390627" watchObservedRunningTime="2026-01-26 12:01:01.950346116 +0000 UTC m=+3960.984386832" Jan 26 12:01:02 crc kubenswrapper[4619]: I0126 12:01:02.261348 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:01:02 crc kubenswrapper[4619]: E0126 12:01:02.261579 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:01:05 crc kubenswrapper[4619]: I0126 12:01:05.958562 4619 generic.go:334] "Generic (PLEG): container finished" podID="8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" containerID="6fe90fb9dfe93e3ff994735f423248b6099f756c96fd2bb5e748312fec6aff63" exitCode=0 Jan 26 12:01:05 crc kubenswrapper[4619]: I0126 12:01:05.958671 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-qvpcl" event={"ID":"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00","Type":"ContainerDied","Data":"6fe90fb9dfe93e3ff994735f423248b6099f756c96fd2bb5e748312fec6aff63"} Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.413216 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.599175 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg57f\" (UniqueName: \"kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f\") pod \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.599561 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys\") pod \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.599590 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data\") pod \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.599689 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle\") pod \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\" (UID: \"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00\") " Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.605497 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f" (OuterVolumeSpecName: "kube-api-access-jg57f") pod "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" (UID: "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00"). InnerVolumeSpecName "kube-api-access-jg57f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.616950 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" (UID: "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.707085 4619 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.707118 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg57f\" (UniqueName: \"kubernetes.io/projected/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-kube-api-access-jg57f\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.721355 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data" (OuterVolumeSpecName: "config-data") pod "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" (UID: "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.744025 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" (UID: "8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.809128 4619 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.809325 4619 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.977663 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-qvpcl" event={"ID":"8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00","Type":"ContainerDied","Data":"1db1cd61e471b0fd13aade17f7a075e5fa4f6ced574b96037974b0010fdab1a6"} Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.977711 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db1cd61e471b0fd13aade17f7a075e5fa4f6ced574b96037974b0010fdab1a6" Jan 26 12:01:07 crc kubenswrapper[4619]: I0126 12:01:07.977958 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490481-qvpcl" Jan 26 12:01:13 crc kubenswrapper[4619]: I0126 12:01:13.265780 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:01:13 crc kubenswrapper[4619]: E0126 12:01:13.268052 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:01:17 crc kubenswrapper[4619]: I0126 12:01:17.079863 4619 generic.go:334] "Generic (PLEG): container finished" podID="48e00634-b1e2-4c5e-b067-beabe362c617" containerID="45c6659d9aa4f663e09f560aee5df62456cdc29051cb4e8ee5e8bdaeb76bfb03" exitCode=0 Jan 26 12:01:17 crc kubenswrapper[4619]: I0126 12:01:17.079954 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-m7q7h" event={"ID":"48e00634-b1e2-4c5e-b067-beabe362c617","Type":"ContainerDied","Data":"45c6659d9aa4f663e09f560aee5df62456cdc29051cb4e8ee5e8bdaeb76bfb03"} Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.211268 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.258425 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w267c/crc-debug-m7q7h"] Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.269019 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w267c/crc-debug-m7q7h"] Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.294065 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpq6x\" (UniqueName: \"kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x\") pod \"48e00634-b1e2-4c5e-b067-beabe362c617\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.294194 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host\") pod \"48e00634-b1e2-4c5e-b067-beabe362c617\" (UID: \"48e00634-b1e2-4c5e-b067-beabe362c617\") " Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.294704 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host" (OuterVolumeSpecName: "host") pod "48e00634-b1e2-4c5e-b067-beabe362c617" (UID: "48e00634-b1e2-4c5e-b067-beabe362c617"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.299243 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x" (OuterVolumeSpecName: "kube-api-access-fpq6x") pod "48e00634-b1e2-4c5e-b067-beabe362c617" (UID: "48e00634-b1e2-4c5e-b067-beabe362c617"). InnerVolumeSpecName "kube-api-access-fpq6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.396558 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpq6x\" (UniqueName: \"kubernetes.io/projected/48e00634-b1e2-4c5e-b067-beabe362c617-kube-api-access-fpq6x\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:18 crc kubenswrapper[4619]: I0126 12:01:18.396593 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48e00634-b1e2-4c5e-b067-beabe362c617-host\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.096652 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931532b20395d6968c5228e4912e42b134c87f9f88b6b42741545d5969633b3c" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.096780 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-m7q7h" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.272600 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e00634-b1e2-4c5e-b067-beabe362c617" path="/var/lib/kubelet/pods/48e00634-b1e2-4c5e-b067-beabe362c617/volumes" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.522929 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w267c/crc-debug-qg2p2"] Jan 26 12:01:19 crc kubenswrapper[4619]: E0126 12:01:19.523381 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" containerName="keystone-cron" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.523405 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" containerName="keystone-cron" Jan 26 12:01:19 crc kubenswrapper[4619]: E0126 12:01:19.523438 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e00634-b1e2-4c5e-b067-beabe362c617" containerName="container-00" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.523447 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e00634-b1e2-4c5e-b067-beabe362c617" containerName="container-00" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.523654 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e00634-b1e2-4c5e-b067-beabe362c617" containerName="container-00" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.523673 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00" containerName="keystone-cron" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.524303 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.526591 4619 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w267c"/"default-dockercfg-zkm9s" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.617225 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.617348 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzdn\" (UniqueName: \"kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.719376 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.719487 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.719750 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzdn\" (UniqueName: \"kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.739255 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzdn\" (UniqueName: \"kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn\") pod \"crc-debug-qg2p2\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:19 crc kubenswrapper[4619]: I0126 12:01:19.840522 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.122681 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-qg2p2" event={"ID":"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267","Type":"ContainerStarted","Data":"f343a5ab095b2ef7f178fa553991ae309b84f90ad3751c2af2ddc0428ac97707"} Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.840777 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.842512 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.865717 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.948812 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.948877 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt4k\" (UniqueName: \"kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:20 crc kubenswrapper[4619]: I0126 12:01:20.948918 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.050783 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.051127 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngt4k\" (UniqueName: \"kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.051171 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.051763 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.051969 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.073318 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngt4k\" (UniqueName: \"kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k\") pod \"community-operators-drddh\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.131336 4619 generic.go:334] "Generic (PLEG): container finished" podID="d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" containerID="2e514854a73ed04a41ffbfeaf4f168b252d5b2cbbafacab66f5892cc1139e6c0" exitCode=0 Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.131375 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-qg2p2" event={"ID":"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267","Type":"ContainerDied","Data":"2e514854a73ed04a41ffbfeaf4f168b252d5b2cbbafacab66f5892cc1139e6c0"} Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.170001 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.829069 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w267c/crc-debug-qg2p2"] Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.846245 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w267c/crc-debug-qg2p2"] Jan 26 12:01:21 crc kubenswrapper[4619]: I0126 12:01:21.877349 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.140449 4619 generic.go:334] "Generic (PLEG): container finished" podID="6859f47d-3514-4db0-b900-8009ad21b665" containerID="886693752aadc37fcb14a5788c7137a452f0618644ff88caf40c2eeadd54aca4" exitCode=0 Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.140559 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerDied","Data":"886693752aadc37fcb14a5788c7137a452f0618644ff88caf40c2eeadd54aca4"} Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.140915 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerStarted","Data":"c43a787e4a61440b9ea6535295ede1a1ab109ccaff85ce64f9cec7cdefdaea37"} Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.143816 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.234272 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.390647 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzdn\" (UniqueName: \"kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn\") pod \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.390806 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host\") pod \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\" (UID: \"d43dac15-5d44-4d3d-8f4d-ebf2eadf8267\") " Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.391003 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host" (OuterVolumeSpecName: "host") pod "d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" (UID: "d43dac15-5d44-4d3d-8f4d-ebf2eadf8267"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.391466 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-host\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.395654 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn" (OuterVolumeSpecName: "kube-api-access-5wzdn") pod "d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" (UID: "d43dac15-5d44-4d3d-8f4d-ebf2eadf8267"). InnerVolumeSpecName "kube-api-access-5wzdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:22 crc kubenswrapper[4619]: I0126 12:01:22.493450 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wzdn\" (UniqueName: \"kubernetes.io/projected/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267-kube-api-access-5wzdn\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.110262 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w267c/crc-debug-nmh8s"] Jan 26 12:01:23 crc kubenswrapper[4619]: E0126 12:01:23.111166 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" containerName="container-00" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.111188 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" containerName="container-00" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.111375 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" containerName="container-00" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.112124 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.150607 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerStarted","Data":"0c85605a7128aadb6520b2d7ca1d6b155ee23b6905678c06f778b585b3bab124"} Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.152500 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f343a5ab095b2ef7f178fa553991ae309b84f90ad3751c2af2ddc0428ac97707" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.152556 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-qg2p2" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.207660 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849tr\" (UniqueName: \"kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.207995 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.272474 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d43dac15-5d44-4d3d-8f4d-ebf2eadf8267" path="/var/lib/kubelet/pods/d43dac15-5d44-4d3d-8f4d-ebf2eadf8267/volumes" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.310504 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849tr\" (UniqueName: \"kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.310547 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.310747 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.340858 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849tr\" (UniqueName: \"kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr\") pod \"crc-debug-nmh8s\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: I0126 12:01:23.425608 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:23 crc kubenswrapper[4619]: W0126 12:01:23.461337 4619 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13789a85_58a0_432f_97ec_47574b6a9f67.slice/crio-d851af8fe0275de17b28543b36922137a00296d3ae9e90cd614403f4a5a39e3d WatchSource:0}: Error finding container d851af8fe0275de17b28543b36922137a00296d3ae9e90cd614403f4a5a39e3d: Status 404 returned error can't find the container with id d851af8fe0275de17b28543b36922137a00296d3ae9e90cd614403f4a5a39e3d Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.161361 4619 generic.go:334] "Generic (PLEG): container finished" podID="13789a85-58a0-432f-97ec-47574b6a9f67" containerID="a69c0480ee9ff91df001bd04ec4dc4e1f3ba280670d87e80d70840a5cc572c81" exitCode=0 Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.161441 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-nmh8s" event={"ID":"13789a85-58a0-432f-97ec-47574b6a9f67","Type":"ContainerDied","Data":"a69c0480ee9ff91df001bd04ec4dc4e1f3ba280670d87e80d70840a5cc572c81"} Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.161752 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/crc-debug-nmh8s" event={"ID":"13789a85-58a0-432f-97ec-47574b6a9f67","Type":"ContainerStarted","Data":"d851af8fe0275de17b28543b36922137a00296d3ae9e90cd614403f4a5a39e3d"} Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.164108 4619 generic.go:334] "Generic (PLEG): container finished" podID="6859f47d-3514-4db0-b900-8009ad21b665" containerID="0c85605a7128aadb6520b2d7ca1d6b155ee23b6905678c06f778b585b3bab124" exitCode=0 Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.164155 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerDied","Data":"0c85605a7128aadb6520b2d7ca1d6b155ee23b6905678c06f778b585b3bab124"} Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.203857 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w267c/crc-debug-nmh8s"] Jan 26 12:01:24 crc kubenswrapper[4619]: I0126 12:01:24.226036 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w267c/crc-debug-nmh8s"] Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.194722 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerStarted","Data":"534f56e3e5f6fd22e8f8c7199cfa043d14019d4aff9a941f184f3ff28fa055dc"} Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.279557 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-drddh" podStartSLOduration=2.85041833 podStartE2EDuration="5.279539534s" podCreationTimestamp="2026-01-26 12:01:20 +0000 UTC" firstStartedPulling="2026-01-26 12:01:22.143535531 +0000 UTC m=+3981.177576247" lastFinishedPulling="2026-01-26 12:01:24.572656735 +0000 UTC m=+3983.606697451" observedRunningTime="2026-01-26 12:01:25.268464182 +0000 UTC m=+3984.302504898" watchObservedRunningTime="2026-01-26 12:01:25.279539534 +0000 UTC m=+3984.313580250" Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.304868 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.472677 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849tr\" (UniqueName: \"kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr\") pod \"13789a85-58a0-432f-97ec-47574b6a9f67\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.473117 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host\") pod \"13789a85-58a0-432f-97ec-47574b6a9f67\" (UID: \"13789a85-58a0-432f-97ec-47574b6a9f67\") " Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.473710 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host" (OuterVolumeSpecName: "host") pod "13789a85-58a0-432f-97ec-47574b6a9f67" (UID: "13789a85-58a0-432f-97ec-47574b6a9f67"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.486427 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr" (OuterVolumeSpecName: "kube-api-access-849tr") pod "13789a85-58a0-432f-97ec-47574b6a9f67" (UID: "13789a85-58a0-432f-97ec-47574b6a9f67"). InnerVolumeSpecName "kube-api-access-849tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.575604 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-849tr\" (UniqueName: \"kubernetes.io/projected/13789a85-58a0-432f-97ec-47574b6a9f67-kube-api-access-849tr\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:25 crc kubenswrapper[4619]: I0126 12:01:25.575660 4619 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13789a85-58a0-432f-97ec-47574b6a9f67-host\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:26 crc kubenswrapper[4619]: I0126 12:01:26.204134 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/crc-debug-nmh8s" Jan 26 12:01:26 crc kubenswrapper[4619]: I0126 12:01:26.204151 4619 scope.go:117] "RemoveContainer" containerID="a69c0480ee9ff91df001bd04ec4dc4e1f3ba280670d87e80d70840a5cc572c81" Jan 26 12:01:26 crc kubenswrapper[4619]: I0126 12:01:26.261856 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:01:27 crc kubenswrapper[4619]: I0126 12:01:27.218943 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec"} Jan 26 12:01:27 crc kubenswrapper[4619]: I0126 12:01:27.273734 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13789a85-58a0-432f-97ec-47574b6a9f67" path="/var/lib/kubelet/pods/13789a85-58a0-432f-97ec-47574b6a9f67/volumes" Jan 26 12:01:31 crc kubenswrapper[4619]: I0126 12:01:31.171495 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:31 crc kubenswrapper[4619]: I0126 12:01:31.173107 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:31 crc kubenswrapper[4619]: I0126 12:01:31.228356 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:31 crc kubenswrapper[4619]: I0126 12:01:31.305321 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.012294 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.012922 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-drddh" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="registry-server" containerID="cri-o://534f56e3e5f6fd22e8f8c7199cfa043d14019d4aff9a941f184f3ff28fa055dc" gracePeriod=2 Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.300288 4619 generic.go:334] "Generic (PLEG): container finished" podID="6859f47d-3514-4db0-b900-8009ad21b665" containerID="534f56e3e5f6fd22e8f8c7199cfa043d14019d4aff9a941f184f3ff28fa055dc" exitCode=0 Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.300374 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerDied","Data":"534f56e3e5f6fd22e8f8c7199cfa043d14019d4aff9a941f184f3ff28fa055dc"} Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.790384 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.975744 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content\") pod \"6859f47d-3514-4db0-b900-8009ad21b665\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.975826 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities\") pod \"6859f47d-3514-4db0-b900-8009ad21b665\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.976189 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngt4k\" (UniqueName: \"kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k\") pod \"6859f47d-3514-4db0-b900-8009ad21b665\" (UID: \"6859f47d-3514-4db0-b900-8009ad21b665\") " Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.976782 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities" (OuterVolumeSpecName: "utilities") pod "6859f47d-3514-4db0-b900-8009ad21b665" (UID: "6859f47d-3514-4db0-b900-8009ad21b665"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.978066 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:35 crc kubenswrapper[4619]: I0126 12:01:35.983200 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k" (OuterVolumeSpecName: "kube-api-access-ngt4k") pod "6859f47d-3514-4db0-b900-8009ad21b665" (UID: "6859f47d-3514-4db0-b900-8009ad21b665"). InnerVolumeSpecName "kube-api-access-ngt4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.033834 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6859f47d-3514-4db0-b900-8009ad21b665" (UID: "6859f47d-3514-4db0-b900-8009ad21b665"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.079887 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngt4k\" (UniqueName: \"kubernetes.io/projected/6859f47d-3514-4db0-b900-8009ad21b665-kube-api-access-ngt4k\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.079932 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6859f47d-3514-4db0-b900-8009ad21b665-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.310080 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drddh" event={"ID":"6859f47d-3514-4db0-b900-8009ad21b665","Type":"ContainerDied","Data":"c43a787e4a61440b9ea6535295ede1a1ab109ccaff85ce64f9cec7cdefdaea37"} Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.310419 4619 scope.go:117] "RemoveContainer" containerID="534f56e3e5f6fd22e8f8c7199cfa043d14019d4aff9a941f184f3ff28fa055dc" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.310566 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-drddh" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.329169 4619 scope.go:117] "RemoveContainer" containerID="0c85605a7128aadb6520b2d7ca1d6b155ee23b6905678c06f778b585b3bab124" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.353679 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.362329 4619 scope.go:117] "RemoveContainer" containerID="886693752aadc37fcb14a5788c7137a452f0618644ff88caf40c2eeadd54aca4" Jan 26 12:01:36 crc kubenswrapper[4619]: I0126 12:01:36.369176 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-drddh"] Jan 26 12:01:37 crc kubenswrapper[4619]: I0126 12:01:37.271697 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6859f47d-3514-4db0-b900-8009ad21b665" path="/var/lib/kubelet/pods/6859f47d-3514-4db0-b900-8009ad21b665/volumes" Jan 26 12:02:14 crc kubenswrapper[4619]: I0126 12:02:14.291635 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668f5b9c84-9qvth_16edc018-6152-42d9-aa2d-70de2c9851f3/barbican-api/0.log" Jan 26 12:02:14 crc kubenswrapper[4619]: I0126 12:02:14.466274 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5cd59c79cd-lqtz6_827c156d-633b-414a-93ef-07d73ba79785/barbican-keystone-listener/0.log" Jan 26 12:02:14 crc kubenswrapper[4619]: I0126 12:02:14.792575 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5cd59c79cd-lqtz6_827c156d-633b-414a-93ef-07d73ba79785/barbican-keystone-listener-log/0.log" Jan 26 12:02:14 crc kubenswrapper[4619]: I0126 12:02:14.823907 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b76d57d79-c2tm5_cd4a2072-c71c-42f6-940e-35435fc350c7/barbican-worker/0.log" Jan 26 12:02:14 crc kubenswrapper[4619]: I0126 12:02:14.996296 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668f5b9c84-9qvth_16edc018-6152-42d9-aa2d-70de2c9851f3/barbican-api-log/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.017754 4619 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-worker-5b76d57d79-c2tm5_cd4a2072-c71c-42f6-940e-35435fc350c7/barbican-worker-log/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.156062 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wfnzr_12059c45-fc17-45cc-a061-a1b5ea704285/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.352716 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/ceilometer-central-agent/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.381299 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/proxy-httpd/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.410061 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/ceilometer-notification-agent/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.538398 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c931ebbd-0d84-4a52-9672-d62698618f7f/sg-core/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.623030 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18a1159e-53c5-4f13-9b4d-c6912b11fe46/cinder-api-log/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.641627 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18a1159e-53c5-4f13-9b4d-c6912b11fe46/cinder-api/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.871357 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_519b14a3-af8d-4238-9bc0-69e13bae0a9e/probe/0.log" Jan 26 12:02:15 crc kubenswrapper[4619]: I0126 12:02:15.892100 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_519b14a3-af8d-4238-9bc0-69e13bae0a9e/cinder-scheduler/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.118683 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-cxxhb_352a4117-3bba-4714-a367-916874cba86f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.157414 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-858f8_51653ef8-78e6-4e44-9391-e815c9d092bf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.343061 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/init/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.563362 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/init/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.578382 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-p6c7l_ac31ccc2-07ae-4326-80dd-12b4e8393331/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.645487 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-fgwpp_1e69603e-4c04-4273-8c8c-b71255c1f370/dnsmasq-dns/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.800200 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_27d84c05-55fb-4f3a-a363-aa137f111de7/glance-httpd/0.log" Jan 26 12:02:16 crc kubenswrapper[4619]: I0126 12:02:16.871481 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_27d84c05-55fb-4f3a-a363-aa137f111de7/glance-log/0.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.018943 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153/glance-httpd/0.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.041794 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9f50a1e8-fb78-41d6-8ba3-4c1c9f66d153/glance-log/0.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.287783 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon/1.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.353603 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon/0.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.682594 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pnwv4_09e6155b-11d0-4cab-83c5-c215bac7c5d8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:17 crc kubenswrapper[4619]: I0126 12:02:17.754045 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-846d64d6c4-66jvl_10c8ed10-dab5-49e5-a030-4be99c720ae0/horizon-log/0.log" Jan 26 12:02:18 crc kubenswrapper[4619]: I0126 12:02:18.107643 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c4v2v_e0c6e648-dad5-48ab-8eb3-0e40a9225e9e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:18 crc kubenswrapper[4619]: I0126 12:02:18.225418 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75ddf854f7-wtpq9_29fb1b8f-bf8f-456d-8e56-8fded6d074a1/keystone-api/0.log" Jan 26 12:02:18 crc kubenswrapper[4619]: I0126 12:02:18.296710 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490481-qvpcl_8b07fc2e-e38d-435f-95c2-fe7bf1ad6d00/keystone-cron/0.log" Jan 26 12:02:18 crc kubenswrapper[4619]: I0126 12:02:18.436985 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8f4bc98f-79c3-4192-973d-32d8df967077/kube-state-metrics/0.log" Jan 26 12:02:18 crc kubenswrapper[4619]: I0126 12:02:18.591187 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h97ld_eaa2c414-823b-48a9-a59d-1f02d1708f9f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:19 crc kubenswrapper[4619]: I0126 12:02:19.016324 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54d868dd9-v7bwm_f5841244-b607-41b5-981c-1bb78b997411/neutron-api/0.log" Jan 26 12:02:19 crc kubenswrapper[4619]: I0126 12:02:19.087372 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-54d868dd9-v7bwm_f5841244-b607-41b5-981c-1bb78b997411/neutron-httpd/0.log" Jan 26 12:02:19 crc kubenswrapper[4619]: I0126 12:02:19.363282 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9c2j6_5f530175-ddda-4a1c-a437-af3747bb0da9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:19 crc kubenswrapper[4619]: I0126 12:02:19.579783 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a99ba972-f513-421c-b25d-c8ecbc095c0f/nova-api-log/0.log" Jan 26 12:02:19 crc kubenswrapper[4619]: I0126 12:02:19.948488 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_50392ffb-8c95-4c47-97e9-03d27141e8e8/nova-cell0-conductor-conductor/0.log" Jan 26 12:02:20 crc kubenswrapper[4619]: I0126 12:02:20.160886 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a99ba972-f513-421c-b25d-c8ecbc095c0f/nova-api-api/0.log" Jan 26 12:02:20 crc kubenswrapper[4619]: I0126 12:02:20.165308 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_65f97f5d-163a-469b-b63e-f2763404b64c/nova-cell1-conductor-conductor/0.log" Jan 26 12:02:20 crc kubenswrapper[4619]: I0126 12:02:20.312720 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2c34909b-2fd9-4e80-b0ef-9dbf87382ee7/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 12:02:20 crc kubenswrapper[4619]: I0126 12:02:20.555523 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xjjng_b641ed88-2b99-4794-a48d-906d2355417d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:20 crc kubenswrapper[4619]: I0126 12:02:20.682795 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5755883f-06f0-4bf0-888d-2742d71ddf6c/nova-metadata-log/0.log" Jan 26 12:02:21 crc kubenswrapper[4619]: I0126 12:02:21.483098 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a018ea11-c0b7-4523-b3f4-1367bb0073fd/nova-scheduler-scheduler/0.log" Jan 26 12:02:21 crc kubenswrapper[4619]: I0126 12:02:21.493394 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/mysql-bootstrap/0.log" Jan 26 12:02:21 crc kubenswrapper[4619]: I0126 12:02:21.707103 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/galera/0.log" Jan 26 12:02:21 crc kubenswrapper[4619]: I0126 12:02:21.755252 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_675ad44b-ca9d-4f4c-947b-06184a5db736/mysql-bootstrap/0.log" Jan 26 12:02:21 crc kubenswrapper[4619]: I0126 12:02:21.982670 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/mysql-bootstrap/0.log" Jan 26 12:02:22 crc kubenswrapper[4619]: I0126 12:02:22.597534 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5755883f-06f0-4bf0-888d-2742d71ddf6c/nova-metadata-metadata/0.log" Jan 26 12:02:22 crc kubenswrapper[4619]: I0126 12:02:22.618636 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/mysql-bootstrap/0.log" Jan 26 12:02:22 
crc kubenswrapper[4619]: I0126 12:02:22.645309 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f5811c2-1a5b-4fc0-aa98-a6604f266891/galera/0.log" Jan 26 12:02:22 crc kubenswrapper[4619]: I0126 12:02:22.944735 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5a4db787-7749-4a67-a52a-b8c4f3229c65/openstackclient/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.054980 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-djjzm_b814fe04-5ad5-4a1f-b49b-9f38ea5be2da/ovn-controller/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.189829 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gbr5p_2c3f919c-3dd6-4aaf-bfd5-468a33b37fdc/openstack-network-exporter/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.604101 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server-init/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.740860 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovs-vswitchd/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.834700 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server-init/0.log" Jan 26 12:02:23 crc kubenswrapper[4619]: I0126 12:02:23.908732 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sq2gq_1778a60a-b3d9-4f16-a8d4-8c0adf54524f/ovsdb-server/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.142370 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4dhp4_1bc81dbb-9f20-43c0-a7a1-cdb5c13fee99/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.224038 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_189b0401-ae3e-44f3-bdcc-9991a88716e8/openstack-network-exporter/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.364003 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_189b0401-ae3e-44f3-bdcc-9991a88716e8/ovn-northd/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.432507 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc19a957-aa75-443a-bd3a-2696241ffbd1/ovsdbserver-nb/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.437837 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cc19a957-aa75-443a-bd3a-2696241ffbd1/openstack-network-exporter/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.696285 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaabd9be-2386-41dc-88ef-944ee93da789/ovsdbserver-sb/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.754517 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaabd9be-2386-41dc-88ef-944ee93da789/openstack-network-exporter/0.log" Jan 26 12:02:24 crc kubenswrapper[4619]: I0126 12:02:24.950989 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846c8954d4-fg4cj_e9d0bb40-3939-4f19-b3e8-f31e6bb0b381/placement-api/0.log" Jan 26 12:02:25 crc 
kubenswrapper[4619]: I0126 12:02:25.018990 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846c8954d4-fg4cj_e9d0bb40-3939-4f19-b3e8-f31e6bb0b381/placement-log/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.093606 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/setup-container/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.422417 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/rabbitmq/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.454016 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f4190b4f-7c04-4c14-83b4-87e224fef035/setup-container/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.547168 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/setup-container/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.825401 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/setup-container/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.829878 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jc55v_3098c4ac-7ae9-4af9-a23f-969054a718fe/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:25 crc kubenswrapper[4619]: I0126 12:02:25.844385 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ba704d2-02d9-4dd9-ac3b-ace8ae61cf85/rabbitmq/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.084631 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wjld2_99b4b151-e965-4c8b-9a4b-22b680ea1d69/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.128180 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6v9qr_af65cd01-ac28-4699-ae97-2fd8546a9925/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.335085 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xz9vt_52d1c976-907c-4749-a9d2-e4a518578cbc/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.428738 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qs6jh_7ea1cb52-e9e9-4fd6-9f4e-41af3d4402d3/ssh-known-hosts-edpm-deployment/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.705107 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659c4b6587-4stqp_faba43f0-103d-43e7-9f3f-ef5be7ee8fe1/proxy-server/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.926703 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659c4b6587-4stqp_faba43f0-103d-43e7-9f3f-ef5be7ee8fe1/proxy-httpd/0.log" Jan 26 12:02:26 crc kubenswrapper[4619]: I0126 12:02:26.933048 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9swh4_ae54b20c-f51c-4b68-9f71-0748e5ba0c32/swift-ring-rebalance/0.log" Jan 26 12:02:26 crc 
kubenswrapper[4619]: I0126 12:02:26.993370 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-auditor/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.207556 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-reaper/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.215562 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-server/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.297769 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/account-replicator/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.312714 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-auditor/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.407361 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-server/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.588828 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-updater/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.597072 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/container-replicator/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.609070 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-auditor/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.717359 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-expirer/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.821402 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-replicator/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.925160 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/rsync/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.963551 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-updater/0.log" Jan 26 12:02:27 crc kubenswrapper[4619]: I0126 12:02:27.970283 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/object-server/0.log" Jan 26 12:02:28 crc kubenswrapper[4619]: I0126 12:02:28.091944 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e50c002e-11c3-4dc8-b32b-c962da06aecb/swift-recon-cron/0.log" Jan 26 12:02:28 crc kubenswrapper[4619]: I0126 12:02:28.308307 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rrnxq_c6a5b4c8-fd30-49e5-853a-6512124a63ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:28 crc kubenswrapper[4619]: I0126 12:02:28.464061 4619 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_tempest-tests-tempest_857314c6-b4dd-4f76-8a06-2bf24b654fe3/tempest-tests-tempest-tests-runner/0.log" Jan 26 12:02:28 crc kubenswrapper[4619]: I0126 12:02:28.571314 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_06dbbb97-7e72-4105-bc6c-275ca6b8c3ee/test-operator-logs-container/0.log" Jan 26 12:02:28 crc kubenswrapper[4619]: I0126 12:02:28.695477 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qzt54_c99e3e59-22b4-4fe8-8fa6-69845f56ef45/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 12:02:38 crc kubenswrapper[4619]: I0126 12:02:38.538300 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8fdb3f80-9734-437b-94c1-6abcc8ce995f/memcached/0.log" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.164509 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:02:59 crc kubenswrapper[4619]: E0126 12:02:59.165559 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13789a85-58a0-432f-97ec-47574b6a9f67" containerName="container-00" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165578 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="13789a85-58a0-432f-97ec-47574b6a9f67" containerName="container-00" Jan 26 12:02:59 crc kubenswrapper[4619]: E0126 12:02:59.165591 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="extract-utilities" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165598 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="extract-utilities" Jan 26 12:02:59 crc kubenswrapper[4619]: E0126 12:02:59.165634 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="registry-server" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165642 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="registry-server" Jan 26 12:02:59 crc kubenswrapper[4619]: E0126 12:02:59.165671 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="extract-content" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165679 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="extract-content" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165915 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="13789a85-58a0-432f-97ec-47574b6a9f67" containerName="container-00" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.165940 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="6859f47d-3514-4db0-b900-8009ad21b665" containerName="registry-server" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.167695 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.174320 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.174473 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.174597 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fckml\" (UniqueName: \"kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.181859 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.277504 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.277899 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.278079 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fckml\" (UniqueName: \"kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.278164 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.278499 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.301898 4619 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fckml\" (UniqueName: \"kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml\") pod \"redhat-marketplace-24bkx\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.485239 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:02:59 crc kubenswrapper[4619]: I0126 12:02:59.963234 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.091542 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.315991 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.349504 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.363239 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.386917 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerStarted","Data":"2975fd2fa86f6cf0a90dcec5823522c91b3d7045598d9e17b2718d32be7ba659"} Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.540245 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/pull/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.561264 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/util/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.641283 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b081bae397df11843da7f99ebab82bbc70d05ca29a03956f47af8ffe07vcck_70360edb-1325-42c3-9ffd-05d030d21375/extract/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.808188 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5d6449f6dc-sd74p_78e0a81b-7050-4a6b-8f89-b1f02cf2bed4/manager/0.log" Jan 26 12:03:00 crc kubenswrapper[4619]: I0126 12:03:00.939860 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6w9xz_0236e799-d5fb-4edf-b0cf-b40093e13c9f/manager/0.log" Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.036264 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-qvfcm_c4c33d5c-a111-42bd-932d-7b60aaa798be/manager/0.log" Jan 
26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.268995 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-x95m2_0821bfee-e661-4cb0-9079-70ee60bdec02/manager/0.log" Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.306155 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2wgql_67d01b92-a260-4a23-a395-1e2c5079dbed/manager/0.log" Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.450236 4619 generic.go:334] "Generic (PLEG): container finished" podID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerID="90e5e5a5f3e4091a02ca1d60232e313636707395538b07b5c0fd423f1ec0b773" exitCode=0 Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.450300 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerDied","Data":"90e5e5a5f3e4091a02ca1d60232e313636707395538b07b5c0fd423f1ec0b773"} Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.540809 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g6t9k_3ada408d-b7d5-4d35-b779-65be4855e174/manager/0.log" Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.812335 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-59hn2_d75eb578-095c-4ad4-b85d-c78417306fb0/manager/0.log" Jan 26 12:03:01 crc kubenswrapper[4619]: I0126 12:03:01.819521 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-758868c854-h44rl_817a0b42-6961-46cf-b353-38aee1dab88c/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.049121 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-pwdwf_9bd38ee3-e401-40e3-8fdc-73722e175d2f/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.102791 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-ltc6c_097a933b-c278-4367-881a-bbd0942d69b3/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.250145 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-2vtpj_146ce69f-077f-483b-a7f6-d32bb6e2ad05/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.524801 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c8mj6_3edab216-d77f-4b95-b98b-0ed86e9b2305/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.861280 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fzrxh_d991f0cd-a82d-443e-b399-ab59ac238b0b/manager/0.log" Jan 26 12:03:02 crc kubenswrapper[4619]: I0126 12:03:02.934726 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-vvxv5_8d9312b1-e850-4099-b5a4-60c113f009a3/manager/0.log" Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.144919 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854nj84s_366e3862-4a5d-447e-890e-1a1ed1d7bf5f/manager/0.log" Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.303181 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b888df747-blvm9_dd1e6c3c-64b1-4ced-9371-c2368efd4620/operator/0.log" Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.481238 4619 generic.go:334] "Generic (PLEG): container finished" podID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerID="4d7b10264e364d011d12de670d3f4a7eaa15bf5378ea3f53910e0d81b3d2067f" exitCode=0 Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.481335 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerDied","Data":"4d7b10264e364d011d12de670d3f4a7eaa15bf5378ea3f53910e0d81b3d2067f"} Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.640143 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xm4c9_c89269ce-7325-4368-8653-48d35a50ee0b/registry-server/0.log" Jan 26 12:03:03 crc kubenswrapper[4619]: I0126 12:03:03.989643 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-fdtcd_a6eb6ada-8607-4687-a235-e8c5f581e4b4/manager/0.log" Jan 26 12:03:04 crc kubenswrapper[4619]: I0126 12:03:04.031182 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-rpdwn_7ca801e8-77b6-4ea2-8bd7-4aec3c0e3c7a/manager/0.log" Jan 26 12:03:04 crc kubenswrapper[4619]: I0126 12:03:04.372096 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77bff5b64d-pzhsq_3b4348c7-3d25-4d2b-837e-5add3c85cd30/manager/0.log" Jan 26 12:03:04 crc kubenswrapper[4619]: I0126 12:03:04.424732 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-j8r8q_9f67fdeb-3415-4da5-a78e-66f6afad477f/operator/0.log" Jan 26 12:03:04 crc kubenswrapper[4619]: I0126 12:03:04.632399 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-jcstk_ad531f03-5ce8-475e-923a-15a9561e79d0/manager/0.log" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.173487 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-z44mm_861bc5f6-bbf8-4626-aed7-a015389630d2/manager/0.log" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.191986 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-gpm8h_f2d78077-e281-4b95-a576-892bf5eaea8d/manager/0.log" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.266494 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-r479p_1200fb20-58ac-4e2b-aa47-d8e3bb34578b/manager/0.log" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.544237 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.546284 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.566761 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.599809 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.599904 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zs7\" (UniqueName: \"kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.599942 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.702009 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.702084 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zs7\" (UniqueName: \"kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.702121 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.702488 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.702633 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.727885 4619 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f6zs7\" (UniqueName: \"kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7\") pod \"certified-operators-lmfgp\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:05 crc kubenswrapper[4619]: I0126 12:03:05.866583 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:07 crc kubenswrapper[4619]: I0126 12:03:07.072958 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:07 crc kubenswrapper[4619]: I0126 12:03:07.516552 4619 generic.go:334] "Generic (PLEG): container finished" podID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerID="cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31" exitCode=0 Jan 26 12:03:07 crc kubenswrapper[4619]: I0126 12:03:07.516591 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerDied","Data":"cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31"} Jan 26 12:03:07 crc kubenswrapper[4619]: I0126 12:03:07.516650 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerStarted","Data":"b7c4b3848067aa9128a082f208dd1bb1b0622ac71bff4bba5891e90b651b2b16"} Jan 26 12:03:08 crc kubenswrapper[4619]: I0126 12:03:08.528421 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerStarted","Data":"8a4bd2dc4dd01554d9656163155f11de26bd91aa7f4fa82f95ec66a3229c70d7"} Jan 26 12:03:08 crc kubenswrapper[4619]: I0126 12:03:08.589265 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24bkx" podStartSLOduration=3.6925849939999997 podStartE2EDuration="9.589239085s" podCreationTimestamp="2026-01-26 12:02:59 +0000 UTC" firstStartedPulling="2026-01-26 12:03:01.467883131 +0000 UTC m=+4080.501923877" lastFinishedPulling="2026-01-26 12:03:07.364537252 +0000 UTC m=+4086.398577968" observedRunningTime="2026-01-26 12:03:08.556240876 +0000 UTC m=+4087.590281592" watchObservedRunningTime="2026-01-26 12:03:08.589239085 +0000 UTC m=+4087.623279801" Jan 26 12:03:09 crc kubenswrapper[4619]: I0126 12:03:09.486000 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:09 crc kubenswrapper[4619]: I0126 12:03:09.486335 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:09 crc kubenswrapper[4619]: I0126 12:03:09.542098 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerStarted","Data":"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81"} Jan 26 12:03:09 crc kubenswrapper[4619]: I0126 12:03:09.577553 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:10 crc kubenswrapper[4619]: I0126 12:03:10.553103 4619 generic.go:334] "Generic (PLEG): container finished" 
podID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerID="497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81" exitCode=0 Jan 26 12:03:10 crc kubenswrapper[4619]: I0126 12:03:10.553319 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerDied","Data":"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81"} Jan 26 12:03:11 crc kubenswrapper[4619]: I0126 12:03:11.563742 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerStarted","Data":"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b"} Jan 26 12:03:11 crc kubenswrapper[4619]: I0126 12:03:11.594974 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmfgp" podStartSLOduration=2.856182233 podStartE2EDuration="6.594957746s" podCreationTimestamp="2026-01-26 12:03:05 +0000 UTC" firstStartedPulling="2026-01-26 12:03:07.518649463 +0000 UTC m=+4086.552690179" lastFinishedPulling="2026-01-26 12:03:11.257424976 +0000 UTC m=+4090.291465692" observedRunningTime="2026-01-26 12:03:11.586801524 +0000 UTC m=+4090.620842250" watchObservedRunningTime="2026-01-26 12:03:11.594957746 +0000 UTC m=+4090.628998462" Jan 26 12:03:15 crc kubenswrapper[4619]: I0126 12:03:15.867019 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:15 crc kubenswrapper[4619]: I0126 12:03:15.867537 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:15 crc kubenswrapper[4619]: I0126 12:03:15.940958 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:16 crc kubenswrapper[4619]: I0126 12:03:16.660597 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:16 crc kubenswrapper[4619]: I0126 12:03:16.715125 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:18 crc kubenswrapper[4619]: I0126 12:03:18.617545 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmfgp" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="registry-server" containerID="cri-o://433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b" gracePeriod=2 Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.173269 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.258501 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content\") pod \"eb02a430-ba3d-458c-9187-59007a3d6a0a\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.258645 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zs7\" (UniqueName: \"kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7\") pod \"eb02a430-ba3d-458c-9187-59007a3d6a0a\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.258710 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities\") pod \"eb02a430-ba3d-458c-9187-59007a3d6a0a\" (UID: \"eb02a430-ba3d-458c-9187-59007a3d6a0a\") " Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.265796 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities" (OuterVolumeSpecName: "utilities") pod "eb02a430-ba3d-458c-9187-59007a3d6a0a" (UID: "eb02a430-ba3d-458c-9187-59007a3d6a0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.272125 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7" (OuterVolumeSpecName: "kube-api-access-f6zs7") pod "eb02a430-ba3d-458c-9187-59007a3d6a0a" (UID: "eb02a430-ba3d-458c-9187-59007a3d6a0a"). InnerVolumeSpecName "kube-api-access-f6zs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.333993 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb02a430-ba3d-458c-9187-59007a3d6a0a" (UID: "eb02a430-ba3d-458c-9187-59007a3d6a0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.361233 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.361260 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zs7\" (UniqueName: \"kubernetes.io/projected/eb02a430-ba3d-458c-9187-59007a3d6a0a-kube-api-access-f6zs7\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.361271 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb02a430-ba3d-458c-9187-59007a3d6a0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.543257 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.591466 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.626753 4619 generic.go:334] "Generic (PLEG): container finished" podID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerID="433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b" exitCode=0 Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.626941 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-24bkx" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="registry-server" containerID="cri-o://8a4bd2dc4dd01554d9656163155f11de26bd91aa7f4fa82f95ec66a3229c70d7" gracePeriod=2 Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.627031 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmfgp" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.628741 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerDied","Data":"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b"} Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.628781 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmfgp" event={"ID":"eb02a430-ba3d-458c-9187-59007a3d6a0a","Type":"ContainerDied","Data":"b7c4b3848067aa9128a082f208dd1bb1b0622ac71bff4bba5891e90b651b2b16"} Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.628800 4619 scope.go:117] "RemoveContainer" containerID="433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.671452 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.673752 4619 scope.go:117] "RemoveContainer" containerID="497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.680760 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmfgp"] Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.701970 4619 scope.go:117] "RemoveContainer" containerID="cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.788970 4619 scope.go:117] "RemoveContainer" containerID="433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b" Jan 26 12:03:19 crc kubenswrapper[4619]: E0126 12:03:19.789357 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b\": container with ID starting with 433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b not found: ID does not exist" containerID="433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.789400 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b"} err="failed to get container status \"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b\": rpc error: code = NotFound desc = could not find container \"433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b\": container with ID starting with 433a02853d3b83f580ae03347d508fe6bdeb4071e41a91ec72edfd62325c544b not found: ID does not exist" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.789426 4619 scope.go:117] "RemoveContainer" containerID="497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81" Jan 26 12:03:19 crc kubenswrapper[4619]: E0126 12:03:19.789712 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81\": container with ID starting with 497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81 not found: ID does not exist" containerID="497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.789730 4619 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81"} err="failed to get container status \"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81\": rpc error: code = NotFound desc = could not find container \"497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81\": container with ID starting with 497c8a415778e13f3aae6cba5748cd14508196a84fc2f60637651e851a29cc81 not found: ID does not exist" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.789742 4619 scope.go:117] "RemoveContainer" containerID="cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31" Jan 26 12:03:19 crc kubenswrapper[4619]: E0126 12:03:19.789924 4619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31\": container with ID starting with cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31 not found: ID does not exist" containerID="cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31" Jan 26 12:03:19 crc kubenswrapper[4619]: I0126 12:03:19.789938 4619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31"} err="failed to get container status \"cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31\": rpc error: code = NotFound desc = could not find container \"cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31\": container with ID starting with cb32ab653ca1a97ca212e372c58b4c7cb19f2527fd79c58aadb57db779e4ec31 not found: ID does not exist" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.639992 4619 generic.go:334] "Generic (PLEG): container finished" podID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerID="8a4bd2dc4dd01554d9656163155f11de26bd91aa7f4fa82f95ec66a3229c70d7" exitCode=0 Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.640850 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerDied","Data":"8a4bd2dc4dd01554d9656163155f11de26bd91aa7f4fa82f95ec66a3229c70d7"} Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.641205 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24bkx" event={"ID":"16813fea-aa5b-45f3-8827-6994e1ffbedb","Type":"ContainerDied","Data":"2975fd2fa86f6cf0a90dcec5823522c91b3d7045598d9e17b2718d32be7ba659"} Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.641316 4619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2975fd2fa86f6cf0a90dcec5823522c91b3d7045598d9e17b2718d32be7ba659" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.697896 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.790106 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities\") pod \"16813fea-aa5b-45f3-8827-6994e1ffbedb\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.790175 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fckml\" (UniqueName: \"kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml\") pod \"16813fea-aa5b-45f3-8827-6994e1ffbedb\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.790382 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content\") pod \"16813fea-aa5b-45f3-8827-6994e1ffbedb\" (UID: \"16813fea-aa5b-45f3-8827-6994e1ffbedb\") " Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.791304 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities" (OuterVolumeSpecName: "utilities") pod "16813fea-aa5b-45f3-8827-6994e1ffbedb" (UID: "16813fea-aa5b-45f3-8827-6994e1ffbedb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.791784 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.795169 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml" (OuterVolumeSpecName: "kube-api-access-fckml") pod "16813fea-aa5b-45f3-8827-6994e1ffbedb" (UID: "16813fea-aa5b-45f3-8827-6994e1ffbedb"). InnerVolumeSpecName "kube-api-access-fckml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.829383 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16813fea-aa5b-45f3-8827-6994e1ffbedb" (UID: "16813fea-aa5b-45f3-8827-6994e1ffbedb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.893039 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16813fea-aa5b-45f3-8827-6994e1ffbedb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:20 crc kubenswrapper[4619]: I0126 12:03:20.893081 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fckml\" (UniqueName: \"kubernetes.io/projected/16813fea-aa5b-45f3-8827-6994e1ffbedb-kube-api-access-fckml\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:21 crc kubenswrapper[4619]: I0126 12:03:21.274350 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" path="/var/lib/kubelet/pods/eb02a430-ba3d-458c-9187-59007a3d6a0a/volumes" Jan 26 12:03:21 crc kubenswrapper[4619]: I0126 12:03:21.649858 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24bkx" Jan 26 12:03:21 crc kubenswrapper[4619]: I0126 12:03:21.694936 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:03:21 crc kubenswrapper[4619]: I0126 12:03:21.706297 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-24bkx"] Jan 26 12:03:23 crc kubenswrapper[4619]: I0126 12:03:23.270777 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" path="/var/lib/kubelet/pods/16813fea-aa5b-45f3-8827-6994e1ffbedb/volumes" Jan 26 12:03:30 crc kubenswrapper[4619]: I0126 12:03:30.880011 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l54p9_696254c7-95ab-43d9-9919-5d1146eec08e/control-plane-machine-set-operator/0.log" Jan 26 12:03:31 crc kubenswrapper[4619]: I0126 12:03:31.049300 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hk46v_0bff9a47-4685-457d-8a24-6139113cdbd8/kube-rbac-proxy/0.log" Jan 26 12:03:31 crc kubenswrapper[4619]: I0126 12:03:31.121945 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hk46v_0bff9a47-4685-457d-8a24-6139113cdbd8/machine-api-operator/0.log" Jan 26 12:03:44 crc kubenswrapper[4619]: I0126 12:03:44.209198 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pzvld_b4ddb0da-8d36-41cf-a6f1-f02a48086888/cert-manager-controller/0.log" Jan 26 12:03:44 crc kubenswrapper[4619]: I0126 12:03:44.234384 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:03:44 crc kubenswrapper[4619]: I0126 12:03:44.235332 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:03:44 crc kubenswrapper[4619]: I0126 12:03:44.396444 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nnd7f_cb296f0e-c4e5-4b2b-82de-49af144cbf77/cert-manager-cainjector/0.log" Jan 26 12:03:44 crc kubenswrapper[4619]: I0126 12:03:44.481850 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-46rtj_e256cee0-a4f8-46ca-bad9-4abc6bf31216/cert-manager-webhook/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:57.999795 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-f7qjj_9ffa32f9-021a-405b-920e-5fb684f8d8e4/nmstate-console-plugin/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:58.219271 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9k8lb_b8830012-0a44-4256-b546-b00b81d136cf/nmstate-handler/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:58.298575 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nh758_b2d70641-12a7-4923-8fa0-f09a91915630/kube-rbac-proxy/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:58.396318 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nh758_b2d70641-12a7-4923-8fa0-f09a91915630/nmstate-metrics/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:58.533817 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cnsxv_a05d614f-24d2-4005-9110-1a002d0670ae/nmstate-operator/0.log" Jan 26 12:03:58 crc kubenswrapper[4619]: I0126 12:03:58.631028 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-56j7m_b2e020f6-4ac4-407d-9eb9-96f1072d01ab/nmstate-webhook/0.log" Jan 26 12:04:14 crc kubenswrapper[4619]: I0126 12:04:14.234186 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:04:14 crc kubenswrapper[4619]: I0126 12:04:14.235585 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.273657 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nz7bv_72d807ee-cc40-44fc-b153-c36c4bb75332/kube-rbac-proxy/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.302315 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nz7bv_72d807ee-cc40-44fc-b153-c36c4bb75332/controller/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.483733 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.661082 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.667434 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.727089 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.741660 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 12:04:27 crc kubenswrapper[4619]: I0126 12:04:27.949954 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.002561 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.003249 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.011633 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.161988 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-reloader/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.211894 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-frr-files/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.240748 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/cp-metrics/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.292633 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/controller/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.427898 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/frr-metrics/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.563543 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/kube-rbac-proxy/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.570744 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/kube-rbac-proxy-frr/0.log" Jan 26 12:04:28 crc kubenswrapper[4619]: I0126 12:04:28.697713 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/reloader/0.log" Jan 26 12:04:29 crc kubenswrapper[4619]: I0126 12:04:29.213416 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jwd5k_c511ad3e-52ab-4e39-bbed-f795da1b29e8/frr-k8s-webhook-server/0.log" Jan 26 12:04:29 crc kubenswrapper[4619]: I0126 12:04:29.538133 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-f5d78644f-q9qtr_4c81b5cf-0f17-4d7b-bfd8-ee67be620339/manager/0.log" Jan 26 12:04:29 crc kubenswrapper[4619]: I0126 12:04:29.587315 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfz7r_9681692a-bb51-4dce-aa10-c85852bff137/frr/0.log" Jan 26 12:04:29 crc kubenswrapper[4619]: I0126 12:04:29.638137 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7575bfd756-qfvd5_50eeef8d-b0ff-4b67-86ef-68febf4bcc0b/webhook-server/0.log" Jan 26 12:04:29 crc kubenswrapper[4619]: I0126 12:04:29.797554 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fdh28_a3fb0354-e5ca-4c6c-a008-44355d8dd331/kube-rbac-proxy/0.log" Jan 26 12:04:30 crc kubenswrapper[4619]: I0126 12:04:30.217206 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fdh28_a3fb0354-e5ca-4c6c-a008-44355d8dd331/speaker/0.log" Jan 26 12:04:43 crc kubenswrapper[4619]: I0126 12:04:43.615411 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 12:04:43 crc kubenswrapper[4619]: I0126 12:04:43.882262 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 12:04:43 crc kubenswrapper[4619]: I0126 12:04:43.939958 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 12:04:43 crc kubenswrapper[4619]: I0126 12:04:43.940165 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.105109 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/pull/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.142888 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/util/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.159444 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmqvp5_f84b7e47-460a-490b-b407-ab46935b44ea/extract/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.235087 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.236206 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.236368 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.238887 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.239185 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec" gracePeriod=600 Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.397479 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.532819 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.556574 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.601408 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.740692 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/util/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.776647 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/extract/0.log" Jan 26 12:04:44 crc kubenswrapper[4619]: I0126 12:04:44.792135 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ks2vr_803f8495-c340-44e0-9b75-18fc9a944fd7/pull/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.005768 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.275188 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.298595 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.344014 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.378206 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec" exitCode=0 Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.378242 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec"} Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.378268 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerStarted","Data":"74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3"} Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.378283 4619 scope.go:117] "RemoveContainer" containerID="66eae0a5212e47155ed1c1f31470a8210b6beb2b56a28c3fcecfeb831bb1f5d4" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.562835 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-utilities/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.726681 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/extract-content/0.log" Jan 26 12:04:45 crc kubenswrapper[4619]: I0126 12:04:45.846994 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.092483 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.135540 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.233176 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fxj2j_b7d685d9-1721-485a-b578-d56fa3c14d91/registry-server/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.244608 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.386547 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-content/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.520804 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/extract-utilities/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.765780 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9klzt_d5665bb8-e8d9-4970-a1d0-db862b679458/marketplace-operator/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.860203 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 12:04:46 crc kubenswrapper[4619]: I0126 12:04:46.910122 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhptm_d179e424-352f-4f5b-afd3-c68b8e79c096/registry-server/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.071852 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.099267 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.131008 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.310444 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-content/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.314866 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/extract-utilities/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.489354 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pkzzx_2fc177c5-7cbd-4962-97d4-89b9b2f7ba3b/registry-server/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.597214 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.776447 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.796840 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 12:04:47 crc kubenswrapper[4619]: I0126 12:04:47.828982 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 12:04:48 crc kubenswrapper[4619]: I0126 12:04:48.090927 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-utilities/0.log" Jan 26 12:04:48 crc kubenswrapper[4619]: I0126 12:04:48.135322 4619 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/extract-content/0.log" Jan 26 12:04:48 crc kubenswrapper[4619]: I0126 12:04:48.563947 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lqtxs_330030a7-d5b2-44ba-8612-30cd6ff41451/registry-server/0.log" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.583437 4619 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79xv4"] Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584569 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="extract-content" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584586 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="extract-content" Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584604 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="extract-utilities" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584628 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="extract-utilities" Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584644 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584652 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584669 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="extract-utilities" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584676 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="extract-utilities" Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584692 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584702 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: E0126 12:06:47.584719 4619 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="extract-content" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584726 4619 state_mem.go:107] "Deleted CPUSet assignment" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="extract-content" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584941 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="16813fea-aa5b-45f3-8827-6994e1ffbedb" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.584959 4619 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02a430-ba3d-458c-9187-59007a3d6a0a" containerName="registry-server" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.586761 4619 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.598463 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79xv4"] Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.710114 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-catalog-content\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.710175 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqnp\" (UniqueName: \"kubernetes.io/projected/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-kube-api-access-fgqnp\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.710391 4619 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-utilities\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.812413 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-catalog-content\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.812466 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqnp\" (UniqueName: \"kubernetes.io/projected/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-kube-api-access-fgqnp\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.812523 4619 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-utilities\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.813003 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-utilities\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.813091 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-catalog-content\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.829836 4619 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fgqnp\" (UniqueName: \"kubernetes.io/projected/0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a-kube-api-access-fgqnp\") pod \"redhat-operators-79xv4\" (UID: \"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a\") " pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:47 crc kubenswrapper[4619]: I0126 12:06:47.913752 4619 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:06:48 crc kubenswrapper[4619]: I0126 12:06:48.544490 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79xv4"] Jan 26 12:06:48 crc kubenswrapper[4619]: I0126 12:06:48.629513 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79xv4" event={"ID":"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a","Type":"ContainerStarted","Data":"0c9b2827483adbb873bf11ec26fe24e24e050f83b3dca96f1cc15919181ceec3"} Jan 26 12:06:49 crc kubenswrapper[4619]: I0126 12:06:49.644979 4619 generic.go:334] "Generic (PLEG): container finished" podID="0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a" containerID="4f959ac52f2d56bf30157f947378cc56edece7675a982e990b8a207df40849ce" exitCode=0 Jan 26 12:06:49 crc kubenswrapper[4619]: I0126 12:06:49.645267 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79xv4" event={"ID":"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a","Type":"ContainerDied","Data":"4f959ac52f2d56bf30157f947378cc56edece7675a982e990b8a207df40849ce"} Jan 26 12:06:49 crc kubenswrapper[4619]: I0126 12:06:49.649505 4619 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:06:57 crc kubenswrapper[4619]: I0126 12:06:57.733453 4619 generic.go:334] "Generic (PLEG): container finished" podID="2544c704-5423-4b36-b2e2-f98cf6306451" containerID="c06904f48e7b7122625f9272ee276e597a611ad12e4cecd2c85c1b2c05d4aaca" exitCode=0 Jan 26 12:06:57 crc kubenswrapper[4619]: I0126 12:06:57.733536 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w267c/must-gather-lpnxr" event={"ID":"2544c704-5423-4b36-b2e2-f98cf6306451","Type":"ContainerDied","Data":"c06904f48e7b7122625f9272ee276e597a611ad12e4cecd2c85c1b2c05d4aaca"} Jan 26 12:06:57 crc kubenswrapper[4619]: I0126 12:06:57.734710 4619 scope.go:117] "RemoveContainer" containerID="c06904f48e7b7122625f9272ee276e597a611ad12e4cecd2c85c1b2c05d4aaca" Jan 26 12:06:58 crc kubenswrapper[4619]: I0126 12:06:58.375051 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w267c_must-gather-lpnxr_2544c704-5423-4b36-b2e2-f98cf6306451/gather/0.log" Jan 26 12:07:03 crc kubenswrapper[4619]: I0126 12:07:03.793660 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79xv4" event={"ID":"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a","Type":"ContainerStarted","Data":"ee58ffe9f2a2af082ce680fb37f0c465ef963b666c6bbfd315871cdf147e1279"} Jan 26 12:07:04 crc kubenswrapper[4619]: E0126 12:07:04.157605 4619 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:57982->38.102.83.69:40211: write tcp 38.102.83.69:57982->38.102.83.69:40211: write: connection reset by peer Jan 26 12:07:05 crc kubenswrapper[4619]: I0126 12:07:05.812351 4619 generic.go:334] "Generic (PLEG): container finished" podID="0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a" containerID="ee58ffe9f2a2af082ce680fb37f0c465ef963b666c6bbfd315871cdf147e1279" exitCode=0 Jan 26 12:07:05 crc kubenswrapper[4619]: I0126 
12:07:05.812569 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79xv4" event={"ID":"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a","Type":"ContainerDied","Data":"ee58ffe9f2a2af082ce680fb37f0c465ef963b666c6bbfd315871cdf147e1279"} Jan 26 12:07:09 crc kubenswrapper[4619]: I0126 12:07:09.850157 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79xv4" event={"ID":"0a9d916f-e62c-48c1-a22e-dc4e0f2ba74a","Type":"ContainerStarted","Data":"08f801340e97734be598b358dcd829ac2b9baf67c6b03e6574382a0c55d39c6d"} Jan 26 12:07:09 crc kubenswrapper[4619]: I0126 12:07:09.874762 4619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79xv4" podStartSLOduration=3.198649234 podStartE2EDuration="22.874745019s" podCreationTimestamp="2026-01-26 12:06:47 +0000 UTC" firstStartedPulling="2026-01-26 12:06:49.6490798 +0000 UTC m=+4308.683120516" lastFinishedPulling="2026-01-26 12:07:09.325175585 +0000 UTC m=+4328.359216301" observedRunningTime="2026-01-26 12:07:09.866676516 +0000 UTC m=+4328.900717232" watchObservedRunningTime="2026-01-26 12:07:09.874745019 +0000 UTC m=+4328.908785735" Jan 26 12:07:10 crc kubenswrapper[4619]: I0126 12:07:10.291832 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w267c/must-gather-lpnxr"] Jan 26 12:07:10 crc kubenswrapper[4619]: I0126 12:07:10.292342 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w267c/must-gather-lpnxr" podUID="2544c704-5423-4b36-b2e2-f98cf6306451" containerName="copy" containerID="cri-o://59adf29c735904c64085d15c171cbf6e49b9e3a770671dbdf261c8433e13e56d" gracePeriod=2 Jan 26 12:07:10 crc kubenswrapper[4619]: I0126 12:07:10.307922 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w267c/must-gather-lpnxr"] Jan 26 12:07:10 crc kubenswrapper[4619]: I0126 12:07:10.859556 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w267c_must-gather-lpnxr_2544c704-5423-4b36-b2e2-f98cf6306451/copy/0.log" Jan 26 12:07:10 crc kubenswrapper[4619]: I0126 12:07:10.860025 4619 generic.go:334] "Generic (PLEG): container finished" podID="2544c704-5423-4b36-b2e2-f98cf6306451" containerID="59adf29c735904c64085d15c171cbf6e49b9e3a770671dbdf261c8433e13e56d" exitCode=143 Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.531936 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w267c_must-gather-lpnxr_2544c704-5423-4b36-b2e2-f98cf6306451/copy/0.log" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.532335 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.646759 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8fwh\" (UniqueName: \"kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh\") pod \"2544c704-5423-4b36-b2e2-f98cf6306451\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.647003 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output\") pod \"2544c704-5423-4b36-b2e2-f98cf6306451\" (UID: \"2544c704-5423-4b36-b2e2-f98cf6306451\") " Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.664874 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh" (OuterVolumeSpecName: "kube-api-access-n8fwh") pod "2544c704-5423-4b36-b2e2-f98cf6306451" (UID: "2544c704-5423-4b36-b2e2-f98cf6306451"). InnerVolumeSpecName "kube-api-access-n8fwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.749058 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8fwh\" (UniqueName: \"kubernetes.io/projected/2544c704-5423-4b36-b2e2-f98cf6306451-kube-api-access-n8fwh\") on node \"crc\" DevicePath \"\"" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.803436 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2544c704-5423-4b36-b2e2-f98cf6306451" (UID: "2544c704-5423-4b36-b2e2-f98cf6306451"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.850393 4619 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2544c704-5423-4b36-b2e2-f98cf6306451-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.869366 4619 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w267c_must-gather-lpnxr_2544c704-5423-4b36-b2e2-f98cf6306451/copy/0.log" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.870715 4619 scope.go:117] "RemoveContainer" containerID="59adf29c735904c64085d15c171cbf6e49b9e3a770671dbdf261c8433e13e56d" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.870747 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w267c/must-gather-lpnxr" Jan 26 12:07:11 crc kubenswrapper[4619]: I0126 12:07:11.889186 4619 scope.go:117] "RemoveContainer" containerID="c06904f48e7b7122625f9272ee276e597a611ad12e4cecd2c85c1b2c05d4aaca" Jan 26 12:07:13 crc kubenswrapper[4619]: I0126 12:07:13.273586 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2544c704-5423-4b36-b2e2-f98cf6306451" path="/var/lib/kubelet/pods/2544c704-5423-4b36-b2e2-f98cf6306451/volumes" Jan 26 12:07:14 crc kubenswrapper[4619]: I0126 12:07:14.234979 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:07:14 crc kubenswrapper[4619]: I0126 12:07:14.235038 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:07:17 crc kubenswrapper[4619]: I0126 12:07:17.914127 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:07:17 crc kubenswrapper[4619]: I0126 12:07:17.914637 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:07:17 crc kubenswrapper[4619]: I0126 12:07:17.969579 4619 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.022237 4619 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79xv4" Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.612517 4619 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79xv4"] Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.784308 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.784986 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lqtxs" podUID="330030a7-d5b2-44ba-8612-30cd6ff41451" containerName="registry-server" containerID="cri-o://2b21c283e5375b1ebbde08fc468940ffa3ccfdc12de5ca78d5218cab1e589f23" gracePeriod=2 Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.946713 4619 generic.go:334] "Generic (PLEG): container finished" podID="330030a7-d5b2-44ba-8612-30cd6ff41451" containerID="2b21c283e5375b1ebbde08fc468940ffa3ccfdc12de5ca78d5218cab1e589f23" exitCode=0 Jan 26 12:07:18 crc kubenswrapper[4619]: I0126 12:07:18.947657 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerDied","Data":"2b21c283e5375b1ebbde08fc468940ffa3ccfdc12de5ca78d5218cab1e589f23"} Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.346268 4619 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.441844 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvpbt\" (UniqueName: \"kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt\") pod \"330030a7-d5b2-44ba-8612-30cd6ff41451\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.441880 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content\") pod \"330030a7-d5b2-44ba-8612-30cd6ff41451\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.442030 4619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities\") pod \"330030a7-d5b2-44ba-8612-30cd6ff41451\" (UID: \"330030a7-d5b2-44ba-8612-30cd6ff41451\") " Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.443996 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities" (OuterVolumeSpecName: "utilities") pod "330030a7-d5b2-44ba-8612-30cd6ff41451" (UID: "330030a7-d5b2-44ba-8612-30cd6ff41451"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.457125 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt" (OuterVolumeSpecName: "kube-api-access-fvpbt") pod "330030a7-d5b2-44ba-8612-30cd6ff41451" (UID: "330030a7-d5b2-44ba-8612-30cd6ff41451"). InnerVolumeSpecName "kube-api-access-fvpbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.544809 4619 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvpbt\" (UniqueName: \"kubernetes.io/projected/330030a7-d5b2-44ba-8612-30cd6ff41451-kube-api-access-fvpbt\") on node \"crc\" DevicePath \"\"" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.544843 4619 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.579468 4619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "330030a7-d5b2-44ba-8612-30cd6ff41451" (UID: "330030a7-d5b2-44ba-8612-30cd6ff41451"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.646446 4619 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330030a7-d5b2-44ba-8612-30cd6ff41451-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.962551 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqtxs" event={"ID":"330030a7-d5b2-44ba-8612-30cd6ff41451","Type":"ContainerDied","Data":"db1c34d323e4775aa65cf067c7a73cae46e3407f433c0856041b803a93998390"} Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.962671 4619 scope.go:117] "RemoveContainer" containerID="2b21c283e5375b1ebbde08fc468940ffa3ccfdc12de5ca78d5218cab1e589f23" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.962587 4619 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqtxs" Jan 26 12:07:19 crc kubenswrapper[4619]: I0126 12:07:19.993921 4619 scope.go:117] "RemoveContainer" containerID="8910fd7227232621d269b969684b356a022a9a5d916c00143c3d078f255c8e91" Jan 26 12:07:20 crc kubenswrapper[4619]: I0126 12:07:20.014697 4619 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 12:07:20 crc kubenswrapper[4619]: I0126 12:07:20.017075 4619 scope.go:117] "RemoveContainer" containerID="25711f0ff69248895bc623d4b0097d73bb0fb0beecdbd12e09214896b631f554" Jan 26 12:07:20 crc kubenswrapper[4619]: I0126 12:07:20.030024 4619 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lqtxs"] Jan 26 12:07:21 crc kubenswrapper[4619]: I0126 12:07:21.273961 4619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330030a7-d5b2-44ba-8612-30cd6ff41451" path="/var/lib/kubelet/pods/330030a7-d5b2-44ba-8612-30cd6ff41451/volumes" Jan 26 12:07:34 crc kubenswrapper[4619]: I0126 12:07:34.003431 4619 scope.go:117] "RemoveContainer" containerID="2e514854a73ed04a41ffbfeaf4f168b252d5b2cbbafacab66f5892cc1139e6c0" Jan 26 12:07:34 crc kubenswrapper[4619]: I0126 12:07:34.040792 4619 scope.go:117] "RemoveContainer" containerID="45c6659d9aa4f663e09f560aee5df62456cdc29051cb4e8ee5e8bdaeb76bfb03" Jan 26 12:07:44 crc kubenswrapper[4619]: I0126 12:07:44.234844 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:07:44 crc kubenswrapper[4619]: I0126 12:07:44.235449 4619 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.234085 4619 patch_prober.go:28] interesting pod/machine-config-daemon-28hd4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.234659 4619 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.234709 4619 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.235474 4619 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3"} pod="openshift-machine-config-operator/machine-config-daemon-28hd4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.235520 4619 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" containerName="machine-config-daemon" containerID="cri-o://74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" gracePeriod=600 Jan 26 12:08:14 crc kubenswrapper[4619]: E0126 12:08:14.357509 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.470153 4619 generic.go:334] "Generic (PLEG): container finished" podID="f33a41bb-6406-4c73-8024-4acd72817832" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" exitCode=0 Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.470197 4619 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" event={"ID":"f33a41bb-6406-4c73-8024-4acd72817832","Type":"ContainerDied","Data":"74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3"} Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.470229 4619 scope.go:117] "RemoveContainer" containerID="64350cc60f5e403d6072c0e2fa04966874fe67e1aef5a089878bd6d50d70e8ec" Jan 26 12:08:14 crc kubenswrapper[4619]: I0126 12:08:14.470852 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:08:14 crc kubenswrapper[4619]: E0126 12:08:14.471115 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:08:27 crc kubenswrapper[4619]: I0126 12:08:27.261176 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:08:27 crc kubenswrapper[4619]: E0126 12:08:27.261989 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:08:38 crc kubenswrapper[4619]: I0126 12:08:38.261913 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:08:38 crc kubenswrapper[4619]: E0126 12:08:38.262843 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:08:49 crc kubenswrapper[4619]: I0126 12:08:49.261164 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:08:49 crc kubenswrapper[4619]: E0126 12:08:49.262029 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:09:00 crc kubenswrapper[4619]: I0126 12:09:00.262071 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:09:00 crc kubenswrapper[4619]: E0126 12:09:00.262877 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:09:14 crc kubenswrapper[4619]: I0126 12:09:14.261374 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:09:14 crc kubenswrapper[4619]: E0126 12:09:14.262217 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:09:29 crc kubenswrapper[4619]: I0126 12:09:29.261467 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:09:29 crc kubenswrapper[4619]: E0126 12:09:29.262191 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:09:34 crc kubenswrapper[4619]: I0126 12:09:34.187963 4619 scope.go:117] "RemoveContainer" containerID="90e5e5a5f3e4091a02ca1d60232e313636707395538b07b5c0fd423f1ec0b773" Jan 26 12:09:34 crc kubenswrapper[4619]: I0126 12:09:34.352752 4619 scope.go:117] "RemoveContainer" containerID="8a4bd2dc4dd01554d9656163155f11de26bd91aa7f4fa82f95ec66a3229c70d7" Jan 26 12:09:34 crc kubenswrapper[4619]: I0126 12:09:34.368944 4619 scope.go:117] "RemoveContainer" containerID="4d7b10264e364d011d12de670d3f4a7eaa15bf5378ea3f53910e0d81b3d2067f" Jan 26 12:09:44 crc kubenswrapper[4619]: I0126 12:09:44.261781 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:09:44 crc kubenswrapper[4619]: E0126 12:09:44.262640 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:09:56 crc kubenswrapper[4619]: I0126 12:09:56.263178 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:09:56 crc kubenswrapper[4619]: E0126 12:09:56.264802 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:10:07 crc kubenswrapper[4619]: I0126 12:10:07.261962 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:10:07 crc kubenswrapper[4619]: E0126 12:10:07.262817 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:10:22 crc kubenswrapper[4619]: I0126 12:10:22.260800 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:10:22 crc kubenswrapper[4619]: E0126 12:10:22.261667 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:10:33 crc kubenswrapper[4619]: I0126 12:10:33.261338 4619 scope.go:117] "RemoveContainer" 
containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:10:33 crc kubenswrapper[4619]: E0126 12:10:33.262195 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:10:48 crc kubenswrapper[4619]: I0126 12:10:48.261150 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:10:48 crc kubenswrapper[4619]: E0126 12:10:48.261851 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:11:02 crc kubenswrapper[4619]: I0126 12:11:02.262112 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:11:02 crc kubenswrapper[4619]: E0126 12:11:02.262959 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:11:15 crc kubenswrapper[4619]: I0126 12:11:15.261257 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:11:15 crc kubenswrapper[4619]: E0126 12:11:15.261992 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:11:28 crc kubenswrapper[4619]: I0126 12:11:28.261589 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:11:28 crc kubenswrapper[4619]: E0126 12:11:28.262262 4619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" Jan 26 12:11:42 crc kubenswrapper[4619]: I0126 12:11:42.261264 4619 scope.go:117] "RemoveContainer" containerID="74c245e303a38f7509d3bab338af309e8e53bf75399b4fafb1f17057fb0358b3" Jan 26 12:11:42 crc kubenswrapper[4619]: E0126 12:11:42.261972 4619 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28hd4_openshift-machine-config-operator(f33a41bb-6406-4c73-8024-4acd72817832)\"" pod="openshift-machine-config-operator/machine-config-daemon-28hd4" podUID="f33a41bb-6406-4c73-8024-4acd72817832" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515135655013024451 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015135655014017367 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015135643367016522 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015135643370015464 5ustar corecore